Test Report: Docker_Linux_containerd_arm64 16890

dc702cb3cbb2bfe371541339d66d19e451f60279:2023-07-17:30187

Failed tests (10/304)

TestAddons/parallel/Ingress (40.11s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-911602 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-911602 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-911602 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2e1750ec-1823-431f-bffe-27c49a99ee1b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2e1750ec-1823-431f-bffe-27c49a99ee1b] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.031722817s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-911602 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.048038655s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-911602 addons disable ingress --alsologtostderr -v=1: (7.824291003s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-911602
helpers_test.go:235: (dbg) docker inspect addons-911602:

-- stdout --
	[
	    {
	        "Id": "2d6a064845cdd1dc5b3043efa85d0986f868e688b5538bbc68e57b49f1359c59",
	        "Created": "2023-07-17T20:15:27.660692213Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 904965,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:15:27.97163808Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/2d6a064845cdd1dc5b3043efa85d0986f868e688b5538bbc68e57b49f1359c59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d6a064845cdd1dc5b3043efa85d0986f868e688b5538bbc68e57b49f1359c59/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d6a064845cdd1dc5b3043efa85d0986f868e688b5538bbc68e57b49f1359c59/hosts",
	        "LogPath": "/var/lib/docker/containers/2d6a064845cdd1dc5b3043efa85d0986f868e688b5538bbc68e57b49f1359c59/2d6a064845cdd1dc5b3043efa85d0986f868e688b5538bbc68e57b49f1359c59-json.log",
	        "Name": "/addons-911602",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-911602:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-911602",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ff8ac2bac736d745a978d0b9f8946a6da268d25e0b062834c6ebe569afd77e2b-init/diff:/var/lib/docker/overlay2/7007f4a8945aebd939b8429923b1b654b284bda949467104beab22408cb6f264/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff8ac2bac736d745a978d0b9f8946a6da268d25e0b062834c6ebe569afd77e2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff8ac2bac736d745a978d0b9f8946a6da268d25e0b062834c6ebe569afd77e2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff8ac2bac736d745a978d0b9f8946a6da268d25e0b062834c6ebe569afd77e2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-911602",
	                "Source": "/var/lib/docker/volumes/addons-911602/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-911602",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-911602",
	                "name.minikube.sigs.k8s.io": "addons-911602",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f60ecd9052d61a3875d67e7ae241419a9d6f37b4d5e5c136260237c590f1324e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33715"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33714"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33711"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33713"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33712"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f60ecd9052d6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-911602": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2d6a064845cd",
	                        "addons-911602"
	                    ],
	                    "NetworkID": "7b25bd7fd78dca0443a74a7d2e5b3d9355f7f3ec76a5d00aa435b8e629d34ef8",
	                    "EndpointID": "39a33f374d0364b58ab47cee1dd7f97dab9a1d95bfecd104ff33cab959400820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-911602 -n addons-911602
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-911602 logs -n 25: (1.754380367s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-062474   | jenkins | v1.30.1 | 17 Jul 23 20:14 UTC |                     |
	|         | -p download-only-062474        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-062474   | jenkins | v1.30.1 | 17 Jul 23 20:14 UTC |                     |
	|         | -p download-only-062474        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:15 UTC |
	| delete  | -p download-only-062474        | download-only-062474   | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:15 UTC |
	| delete  | -p download-only-062474        | download-only-062474   | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:15 UTC |
	| start   | --download-only -p             | download-docker-219942 | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC |                     |
	|         | download-docker-219942         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p download-docker-219942      | download-docker-219942 | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:15 UTC |
	| start   | --download-only -p             | binary-mirror-408945   | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC |                     |
	|         | binary-mirror-408945           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34645         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-408945        | binary-mirror-408945   | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:15 UTC |
	| start   | -p addons-911602               | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:17 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=containerd |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|         | addons-911602                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|         | -p addons-911602               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-911602 ip               | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	| addons  | addons-911602 addons disable   | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-911602 addons           | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|         | addons-911602                  |                        |         |         |                     |                     |
	| ssh     | addons-911602 ssh curl -s      | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-911602 ip               | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	| addons  | addons-911602 addons           | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-911602 addons disable   | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-911602 addons disable   | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons  | addons-911602 addons           | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:15:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:15:03.818614  904497 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:15:03.818788  904497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:15:03.818799  904497 out.go:309] Setting ErrFile to fd 2...
	I0717 20:15:03.818805  904497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:15:03.819409  904497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:15:03.819879  904497 out.go:303] Setting JSON to false
	I0717 20:15:03.820887  904497 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14251,"bootTime":1689610653,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:15:03.820960  904497 start.go:138] virtualization:  
	I0717 20:15:03.823917  904497 out.go:177] * [addons-911602] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:15:03.826503  904497 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:15:03.826638  904497 notify.go:220] Checking for updates...
	I0717 20:15:03.828593  904497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:15:03.830917  904497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:15:03.833557  904497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:15:03.835761  904497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:15:03.837816  904497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:15:03.840045  904497 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:15:03.864609  904497 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:15:03.864714  904497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:15:03.943691  904497 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 20:15:03.933542801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:15:03.943804  904497 docker.go:294] overlay module found
	I0717 20:15:03.946115  904497 out.go:177] * Using the docker driver based on user configuration
	I0717 20:15:03.947784  904497 start.go:298] selected driver: docker
	I0717 20:15:03.947806  904497 start.go:880] validating driver "docker" against <nil>
	I0717 20:15:03.947820  904497 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:15:03.948477  904497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:15:04.021313  904497 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 20:15:04.010736855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:15:04.021483  904497 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 20:15:04.021719  904497 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 20:15:04.023708  904497 out.go:177] * Using Docker driver with root privileges
	I0717 20:15:04.025393  904497 cni.go:84] Creating CNI manager for ""
	I0717 20:15:04.025422  904497 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:15:04.025436  904497 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 20:15:04.025448  904497 start_flags.go:319] config:
	{Name:addons-911602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-911602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:15:04.027857  904497 out.go:177] * Starting control plane node addons-911602 in cluster addons-911602
	I0717 20:15:04.029978  904497 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 20:15:04.031653  904497 out.go:177] * Pulling base image ...
	I0717 20:15:04.033288  904497 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:15:04.033350  904497 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4
	I0717 20:15:04.033363  904497 cache.go:57] Caching tarball of preloaded images
	I0717 20:15:04.033452  904497 preload.go:174] Found /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 20:15:04.033467  904497 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0717 20:15:04.033823  904497 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/config.json ...
	I0717 20:15:04.033862  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/config.json: {Name:mkfce664cb2ce61912bdb61bd25dcddce6100c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:04.034035  904497 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 20:15:04.051314  904497 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 20:15:04.051429  904497 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 20:15:04.051460  904497 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 20:15:04.051467  904497 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 20:15:04.051474  904497 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 20:15:04.051480  904497 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0717 20:15:20.232878  904497 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0717 20:15:20.232931  904497 cache.go:195] Successfully downloaded all kic artifacts
	I0717 20:15:20.232981  904497 start.go:365] acquiring machines lock for addons-911602: {Name:mk7d0cfa75e86f3e6269696e4048ef0d29c14a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:15:20.233720  904497 start.go:369] acquired machines lock for "addons-911602" in 712.282µs
	I0717 20:15:20.233761  904497 start.go:93] Provisioning new machine with config: &{Name:addons-911602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-911602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 20:15:20.233876  904497 start.go:125] createHost starting for "" (driver="docker")
	I0717 20:15:20.236094  904497 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 20:15:20.236352  904497 start.go:159] libmachine.API.Create for "addons-911602" (driver="docker")
	I0717 20:15:20.236388  904497 client.go:168] LocalClient.Create starting
	I0717 20:15:20.236497  904497 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem
	I0717 20:15:21.256037  904497 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem
	I0717 20:15:21.434838  904497 cli_runner.go:164] Run: docker network inspect addons-911602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 20:15:21.455934  904497 cli_runner.go:211] docker network inspect addons-911602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 20:15:21.456027  904497 network_create.go:281] running [docker network inspect addons-911602] to gather additional debugging logs...
	I0717 20:15:21.456048  904497 cli_runner.go:164] Run: docker network inspect addons-911602
	W0717 20:15:21.474253  904497 cli_runner.go:211] docker network inspect addons-911602 returned with exit code 1
	I0717 20:15:21.474297  904497 network_create.go:284] error running [docker network inspect addons-911602]: docker network inspect addons-911602: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-911602 not found
	I0717 20:15:21.474311  904497 network_create.go:286] output of [docker network inspect addons-911602]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-911602 not found
	
	** /stderr **
	I0717 20:15:21.474379  904497 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:15:21.492826  904497 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40011d9100}
	I0717 20:15:21.492891  904497 network_create.go:123] attempt to create docker network addons-911602 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 20:15:21.492953  904497 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-911602 addons-911602
	I0717 20:15:21.568520  904497 network_create.go:107] docker network addons-911602 192.168.49.0/24 created
	I0717 20:15:21.568558  904497 kic.go:117] calculated static IP "192.168.49.2" for the "addons-911602" container
	I0717 20:15:21.568634  904497 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 20:15:21.585955  904497 cli_runner.go:164] Run: docker volume create addons-911602 --label name.minikube.sigs.k8s.io=addons-911602 --label created_by.minikube.sigs.k8s.io=true
	I0717 20:15:21.604942  904497 oci.go:103] Successfully created a docker volume addons-911602
	I0717 20:15:21.605051  904497 cli_runner.go:164] Run: docker run --rm --name addons-911602-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-911602 --entrypoint /usr/bin/test -v addons-911602:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 20:15:23.422026  904497 cli_runner.go:217] Completed: docker run --rm --name addons-911602-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-911602 --entrypoint /usr/bin/test -v addons-911602:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.816933185s)
	I0717 20:15:23.422054  904497 oci.go:107] Successfully prepared a docker volume addons-911602
	I0717 20:15:23.422072  904497 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:15:23.422090  904497 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 20:15:23.422183  904497 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-911602:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 20:15:27.579824  904497 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-911602:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.157590747s)
	I0717 20:15:27.579860  904497 kic.go:199] duration metric: took 4.157765 seconds to extract preloaded images to volume
	W0717 20:15:27.580009  904497 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 20:15:27.580121  904497 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 20:15:27.644052  904497 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-911602 --name addons-911602 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-911602 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-911602 --network addons-911602 --ip 192.168.49.2 --volume addons-911602:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 20:15:27.979750  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Running}}
	I0717 20:15:28.012750  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:15:28.041138  904497 cli_runner.go:164] Run: docker exec addons-911602 stat /var/lib/dpkg/alternatives/iptables
	I0717 20:15:28.103643  904497 oci.go:144] the created container "addons-911602" has a running status.
	I0717 20:15:28.103673  904497 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa...
	I0717 20:15:29.074068  904497 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 20:15:29.098554  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:15:29.144341  904497 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 20:15:29.144368  904497 kic_runner.go:114] Args: [docker exec --privileged addons-911602 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 20:15:29.246587  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:15:29.268628  904497 machine.go:88] provisioning docker machine ...
	I0717 20:15:29.268689  904497 ubuntu.go:169] provisioning hostname "addons-911602"
	I0717 20:15:29.268760  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:15:29.292074  904497 main.go:141] libmachine: Using SSH client type: native
	I0717 20:15:29.292534  904497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33715 <nil> <nil>}
	I0717 20:15:29.292547  904497 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-911602 && echo "addons-911602" | sudo tee /etc/hostname
	I0717 20:15:29.450921  904497 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-911602
	
	I0717 20:15:29.451006  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:15:29.469201  904497 main.go:141] libmachine: Using SSH client type: native
	I0717 20:15:29.469645  904497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33715 <nil> <nil>}
	I0717 20:15:29.469670  904497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-911602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-911602/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-911602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:15:29.602955  904497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:15:29.602982  904497 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-898608/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-898608/.minikube}
	I0717 20:15:29.603025  904497 ubuntu.go:177] setting up certificates
	I0717 20:15:29.603035  904497 provision.go:83] configureAuth start
	I0717 20:15:29.603132  904497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-911602
	I0717 20:15:29.622988  904497 provision.go:138] copyHostCerts
	I0717 20:15:29.623072  904497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem (1675 bytes)
	I0717 20:15:29.623197  904497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem (1078 bytes)
	I0717 20:15:29.623304  904497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem (1123 bytes)
	I0717 20:15:29.623365  904497 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem org=jenkins.addons-911602 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-911602]
	I0717 20:15:30.071574  904497 provision.go:172] copyRemoteCerts
	I0717 20:15:30.071673  904497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:15:30.071724  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:15:30.100687  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:15:30.214665  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 20:15:30.251995  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 20:15:30.285517  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 20:15:30.319449  904497 provision.go:86] duration metric: configureAuth took 716.390081ms
	I0717 20:15:30.319521  904497 ubuntu.go:193] setting minikube options for container-runtime
	I0717 20:15:30.319745  904497 config.go:182] Loaded profile config "addons-911602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:15:30.319780  904497 machine.go:91] provisioned docker machine in 1.051128133s
	I0717 20:15:30.319794  904497 client.go:171] LocalClient.Create took 10.08340012s
	I0717 20:15:30.319817  904497 start.go:167] duration metric: libmachine.API.Create for "addons-911602" took 10.083465491s
	I0717 20:15:30.319827  904497 start.go:300] post-start starting for "addons-911602" (driver="docker")
	I0717 20:15:30.319836  904497 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:15:30.319925  904497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:15:30.319970  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:15:30.344001  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:15:30.440590  904497 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:15:30.445136  904497 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 20:15:30.445175  904497 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 20:15:30.445190  904497 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 20:15:30.445196  904497 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 20:15:30.445206  904497 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/addons for local assets ...
	I0717 20:15:30.445290  904497 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/files for local assets ...
	I0717 20:15:30.445322  904497 start.go:303] post-start completed in 125.48933ms
	I0717 20:15:30.445687  904497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-911602
	I0717 20:15:30.464487  904497 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/config.json ...
	I0717 20:15:30.464779  904497 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:15:30.464831  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:15:30.483471  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:15:30.579498  904497 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 20:15:30.585788  904497 start.go:128] duration metric: createHost completed in 10.35189791s
	I0717 20:15:30.585811  904497 start.go:83] releasing machines lock for "addons-911602", held for 10.35207259s
	I0717 20:15:30.585893  904497 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-911602
	I0717 20:15:30.604519  904497 ssh_runner.go:195] Run: cat /version.json
	I0717 20:15:30.604539  904497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:15:30.604574  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:15:30.604600  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:15:30.626332  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:15:30.640951  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	W0717 20:15:30.854159  904497 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 20:15:30.854272  904497 ssh_runner.go:195] Run: systemctl --version
	I0717 20:15:30.860783  904497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 20:15:30.866789  904497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 20:15:30.899449  904497 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 20:15:30.899541  904497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:15:30.932949  904497 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 20:15:30.932975  904497 start.go:469] detecting cgroup driver to use...
	I0717 20:15:30.933007  904497 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 20:15:30.933068  904497 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 20:15:30.948312  904497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 20:15:30.962867  904497 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:15:30.962980  904497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:15:30.979635  904497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:15:30.997067  904497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 20:15:31.094035  904497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:15:31.194648  904497 docker.go:212] disabling docker service ...
	I0717 20:15:31.194761  904497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:15:31.221447  904497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:15:31.238643  904497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:15:31.344200  904497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:15:31.440980  904497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:15:31.455353  904497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:15:31.475814  904497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 20:15:31.488365  904497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 20:15:31.501606  904497 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 20:15:31.501700  904497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 20:15:31.514992  904497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:15:31.527281  904497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 20:15:31.539567  904497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:15:31.551817  904497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:15:31.563792  904497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 20:15:31.577588  904497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:15:31.588387  904497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:15:31.598966  904497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:15:31.697624  904497 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 20:15:31.796020  904497 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 20:15:31.796109  904497 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 20:15:31.801314  904497 start.go:537] Will wait 60s for crictl version
	I0717 20:15:31.801374  904497 ssh_runner.go:195] Run: which crictl
	I0717 20:15:31.806612  904497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:15:31.859770  904497 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0717 20:15:31.859853  904497 ssh_runner.go:195] Run: containerd --version
	I0717 20:15:31.890474  904497 ssh_runner.go:195] Run: containerd --version
	I0717 20:15:31.924371  904497 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0717 20:15:31.926388  904497 cli_runner.go:164] Run: docker network inspect addons-911602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:15:31.944648  904497 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 20:15:31.949691  904497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
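The /etc/hosts update above (and the control-plane.minikube.internal one later) uses an idempotent strip-then-append pattern: remove any stale line for the host, then write the fresh mapping. Sketched against a temp file standing in for /etc/hosts (file contents here are assumed):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
# Drop any existing host.minikube.internal line, then append the current one;
# running this repeatedly always leaves exactly one entry.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

The real command copies via `sudo cp` instead of `mv` because /etc/hosts is root-owned while the temp file is not.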
	I0717 20:15:31.964053  904497 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:15:31.964123  904497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:15:32.010186  904497 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 20:15:32.010210  904497 containerd.go:518] Images already preloaded, skipping extraction
	I0717 20:15:32.010274  904497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:15:32.061202  904497 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 20:15:32.061224  904497 cache_images.go:84] Images are preloaded, skipping loading
	I0717 20:15:32.061283  904497 ssh_runner.go:195] Run: sudo crictl info
	I0717 20:15:32.106133  904497 cni.go:84] Creating CNI manager for ""
	I0717 20:15:32.106158  904497 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:15:32.106170  904497 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 20:15:32.106188  904497 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-911602 NodeName:addons-911602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 20:15:32.106319  904497 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-911602"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
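The generated config is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the log later writes to /var/tmp/minikube/kubeadm.yaml.new. A quick structural sanity sketch over a temp stand-in for that file (the skeleton below only reproduces the `apiVersion`/`kind` headers from the log):

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Each "---"-separated document must declare exactly one kind.
grep '^kind:' "$cfg"
```

With a real file, `kubeadm init --config <file> --dry-run` would be the fuller check, but that requires kubeadm and root.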
	I0717 20:15:32.106391  904497 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-911602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-911602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 20:15:32.106458  904497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 20:15:32.117739  904497 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 20:15:32.117808  904497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 20:15:32.128637  904497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0717 20:15:32.150647  904497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 20:15:32.173349  904497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0717 20:15:32.195249  904497 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 20:15:32.200083  904497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:15:32.214351  904497 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602 for IP: 192.168.49.2
	I0717 20:15:32.214391  904497 certs.go:190] acquiring lock for shared ca certs: {Name:mk081da4b0c80820af8357079096999320bef2ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:32.214557  904497 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key
	I0717 20:15:33.464641  904497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt ...
	I0717 20:15:33.464681  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt: {Name:mk8cb6729108a6101513adca03ccfb8fc6ea2464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:33.464940  904497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key ...
	I0717 20:15:33.464970  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key: {Name:mk912a788f904356e6d4e3be52142fac192462b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:33.465091  904497 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key
	I0717 20:15:34.126817  904497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.crt ...
	I0717 20:15:34.126850  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.crt: {Name:mka306165af8246882df3a82ccfc75e978daf7f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:34.127075  904497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key ...
	I0717 20:15:34.127090  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key: {Name:mk74ee76ad24cb2b851e69dcecbca599d3416fc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:34.127209  904497 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.key
	I0717 20:15:34.127247  904497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt with IP's: []
	I0717 20:15:35.342740  904497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt ...
	I0717 20:15:35.342775  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: {Name:mkc3a4c886ec698764a244dbba6cf1fa1a54fad3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:35.343001  904497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.key ...
	I0717 20:15:35.343013  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.key: {Name:mkf317f904f357dfb140ff3163dc32f4c3665f67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:35.343114  904497 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.key.dd3b5fb2
	I0717 20:15:35.343134  904497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 20:15:36.438588  904497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.crt.dd3b5fb2 ...
	I0717 20:15:36.438626  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.crt.dd3b5fb2: {Name:mk8b548880645fa175ec9cf7c1cd9054ec303391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:36.438845  904497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.key.dd3b5fb2 ...
	I0717 20:15:36.438860  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.key.dd3b5fb2: {Name:mk00482ec6129f194e143b2936b7a3c85a5eac13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:36.438963  904497 certs.go:337] copying /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.crt
	I0717 20:15:36.439044  904497 certs.go:341] copying /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.key
	I0717 20:15:36.439108  904497 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/proxy-client.key
	I0717 20:15:36.439137  904497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/proxy-client.crt with IP's: []
	I0717 20:15:37.171073  904497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/proxy-client.crt ...
	I0717 20:15:37.171108  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/proxy-client.crt: {Name:mk6828c3eab5f0a8f80cbb3fcc990209146c2917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:37.171987  904497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/proxy-client.key ...
	I0717 20:15:37.172008  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/proxy-client.key: {Name:mk16569067a8f279ba508a408971c7b6ae77f37d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:15:37.172249  904497 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 20:15:37.172316  904497 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem (1078 bytes)
	I0717 20:15:37.172350  904497 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem (1123 bytes)
	I0717 20:15:37.172376  904497 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem (1675 bytes)
	I0717 20:15:37.173164  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 20:15:37.202948  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 20:15:37.234051  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 20:15:37.264591  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 20:15:37.294733  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 20:15:37.325357  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 20:15:37.354961  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 20:15:37.384361  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 20:15:37.413679  904497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 20:15:37.443345  904497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 20:15:37.465223  904497 ssh_runner.go:195] Run: openssl version
	I0717 20:15:37.472355  904497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 20:15:37.485128  904497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:15:37.490752  904497 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 20:15 /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:15:37.490820  904497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:15:37.500063  904497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
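The two commands above implement OpenSSL's hashed-directory CA lookup: the cert's subject hash names a `<hash>.0` symlink in /etc/ssl/certs. Sketched with a throwaway self-signed cert in a temp dir (the demoCA name and paths are invented for illustration):

```shell
dir=$(mktemp -d)
# Generate a disposable self-signed cert to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# Same derivation as the log: subject hash becomes the symlink name.
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$h.0"
ls -l "$dir"
```

The `.0` suffix disambiguates distinct certificates whose subjects hash to the same value (they would get `.1`, `.2`, and so on).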
	I0717 20:15:37.512204  904497 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 20:15:37.516823  904497 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 20:15:37.516893  904497 kubeadm.go:404] StartCluster: {Name:addons-911602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-911602 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:15:37.517026  904497 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 20:15:37.517111  904497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 20:15:37.561512  904497 cri.go:89] found id: ""
	I0717 20:15:37.561634  904497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 20:15:37.572764  904497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:15:37.583591  904497 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 20:15:37.583683  904497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:15:37.594546  904497 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:15:37.594632  904497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 20:15:37.647941  904497 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:15:37.648058  904497 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:15:37.692057  904497 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 20:15:37.692166  904497 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 20:15:37.692230  904497 kubeadm.go:322] OS: Linux
	I0717 20:15:37.692296  904497 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 20:15:37.692370  904497 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 20:15:37.692437  904497 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 20:15:37.692511  904497 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 20:15:37.692576  904497 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 20:15:37.692650  904497 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 20:15:37.692713  904497 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 20:15:37.692786  904497 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 20:15:37.692876  904497 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 20:15:37.773726  904497 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:15:37.773838  904497 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:15:37.773933  904497 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:15:38.060882  904497 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:15:38.064354  904497 out.go:204]   - Generating certificates and keys ...
	I0717 20:15:38.064507  904497 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:15:38.064580  904497 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:15:38.878356  904497 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 20:15:39.230812  904497 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 20:15:39.588096  904497 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 20:15:40.095604  904497 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 20:15:40.470911  904497 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 20:15:40.471320  904497 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-911602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 20:15:40.643484  904497 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 20:15:40.643884  904497 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-911602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 20:15:40.861347  904497 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 20:15:41.833224  904497 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 20:15:42.447693  904497 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 20:15:42.447783  904497 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:15:42.867472  904497 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:15:43.057795  904497 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:15:43.493444  904497 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:15:43.952861  904497 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:15:43.969326  904497 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:15:43.970449  904497 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:15:43.970683  904497 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:15:44.086119  904497 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:15:44.088549  904497 out.go:204]   - Booting up control plane ...
	I0717 20:15:44.088658  904497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:15:44.092127  904497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:15:44.095725  904497 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:15:44.097596  904497 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:15:44.100801  904497 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:15:51.603317  904497 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502420 seconds
	I0717 20:15:51.603433  904497 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:15:51.620173  904497 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:15:52.147112  904497 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:15:52.147297  904497 kubeadm.go:322] [mark-control-plane] Marking the node addons-911602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 20:15:52.659819  904497 kubeadm.go:322] [bootstrap-token] Using token: 8jh59d.5frqx4nxkkqdu2xr
	I0717 20:15:52.662123  904497 out.go:204]   - Configuring RBAC rules ...
	I0717 20:15:52.662253  904497 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:15:52.674906  904497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:15:52.683741  904497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:15:52.688104  904497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:15:52.692207  904497 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:15:52.696597  904497 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:15:52.710465  904497 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:15:52.936200  904497 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:15:53.083918  904497 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:15:53.087430  904497 kubeadm.go:322] 
	I0717 20:15:53.087502  904497 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:15:53.087508  904497 kubeadm.go:322] 
	I0717 20:15:53.087580  904497 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:15:53.087584  904497 kubeadm.go:322] 
	I0717 20:15:53.087608  904497 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:15:53.087664  904497 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:15:53.087711  904497 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:15:53.087716  904497 kubeadm.go:322] 
	I0717 20:15:53.087767  904497 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 20:15:53.087772  904497 kubeadm.go:322] 
	I0717 20:15:53.087816  904497 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 20:15:53.087821  904497 kubeadm.go:322] 
	I0717 20:15:53.087870  904497 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:15:53.087943  904497 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:15:53.088008  904497 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:15:53.088012  904497 kubeadm.go:322] 
	I0717 20:15:53.088090  904497 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:15:53.088162  904497 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:15:53.088167  904497 kubeadm.go:322] 
	I0717 20:15:53.088513  904497 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8jh59d.5frqx4nxkkqdu2xr \
	I0717 20:15:53.088616  904497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c9abf7e3fe1433b2ee101b0228035f8f2ed73026247b5c044e3b341ae3a2cc41 \
	I0717 20:15:53.088636  904497 kubeadm.go:322] 	--control-plane 
	I0717 20:15:53.088643  904497 kubeadm.go:322] 
	I0717 20:15:53.088722  904497 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:15:53.088728  904497 kubeadm.go:322] 
	I0717 20:15:53.088804  904497 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8jh59d.5frqx4nxkkqdu2xr \
	I0717 20:15:53.088931  904497 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c9abf7e3fe1433b2ee101b0228035f8f2ed73026247b5c044e3b341ae3a2cc41 
	I0717 20:15:53.092209  904497 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 20:15:53.092320  904497 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:15:53.092334  904497 cni.go:84] Creating CNI manager for ""
	I0717 20:15:53.092344  904497 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:15:53.094666  904497 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 20:15:53.096484  904497 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 20:15:53.104070  904497 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 20:15:53.104095  904497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 20:15:53.143993  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 20:15:54.105718  904497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:15:54.105876  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:54.105950  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=addons-911602 minikube.k8s.io/updated_at=2023_07_17T20_15_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:54.326040  904497 ops.go:34] apiserver oom_adj: -16
	I0717 20:15:54.326127  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:54.934967  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:55.435363  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:55.934437  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:56.435216  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:56.934363  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:57.434952  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:57.935227  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:58.434998  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:58.934410  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:59.434629  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:15:59.935355  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:00.434465  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:00.935112  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:01.434930  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:01.934380  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:02.434987  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:02.934425  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:03.434416  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:03.934989  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:04.435025  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:04.935361  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:05.434340  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:05.935242  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:06.434513  904497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:16:06.546528  904497 kubeadm.go:1081] duration metric: took 12.440715786s to wait for elevateKubeSystemPrivileges.
	I0717 20:16:06.546552  904497 kubeadm.go:406] StartCluster complete in 29.029664233s
	I0717 20:16:06.546568  904497 settings.go:142] acquiring lock: {Name:mk07e0d8498fadd24504785e1ba3db0cfccaf251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:16:06.547320  904497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:16:06.547717  904497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/kubeconfig: {Name:mk933d9b210c77bbf248211a6ac799f4302f2fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:16:06.548576  904497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:16:06.548884  904497 config.go:182] Loaded profile config "addons-911602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:16:06.548992  904497 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0717 20:16:06.549068  904497 addons.go:69] Setting volumesnapshots=true in profile "addons-911602"
	I0717 20:16:06.549097  904497 addons.go:231] Setting addon volumesnapshots=true in "addons-911602"
	I0717 20:16:06.549142  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.549584  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.550726  904497 addons.go:69] Setting cloud-spanner=true in profile "addons-911602"
	I0717 20:16:06.550752  904497 addons.go:231] Setting addon cloud-spanner=true in "addons-911602"
	I0717 20:16:06.550803  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.551228  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.551923  904497 addons.go:69] Setting inspektor-gadget=true in profile "addons-911602"
	I0717 20:16:06.551968  904497 addons.go:231] Setting addon inspektor-gadget=true in "addons-911602"
	I0717 20:16:06.552040  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.552540  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.552664  904497 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-911602"
	I0717 20:16:06.552727  904497 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-911602"
	I0717 20:16:06.552795  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.553289  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.553406  904497 addons.go:69] Setting default-storageclass=true in profile "addons-911602"
	I0717 20:16:06.553439  904497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-911602"
	I0717 20:16:06.553743  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.553853  904497 addons.go:69] Setting gcp-auth=true in profile "addons-911602"
	I0717 20:16:06.553886  904497 mustload.go:65] Loading cluster: addons-911602
	I0717 20:16:06.554090  904497 config.go:182] Loaded profile config "addons-911602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:16:06.554894  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.555066  904497 addons.go:69] Setting ingress=true in profile "addons-911602"
	I0717 20:16:06.555126  904497 addons.go:231] Setting addon ingress=true in "addons-911602"
	I0717 20:16:06.555215  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.556037  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.556163  904497 addons.go:69] Setting ingress-dns=true in profile "addons-911602"
	I0717 20:16:06.556195  904497 addons.go:231] Setting addon ingress-dns=true in "addons-911602"
	I0717 20:16:06.556268  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.556689  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.556803  904497 addons.go:69] Setting storage-provisioner=true in profile "addons-911602"
	I0717 20:16:06.556838  904497 addons.go:231] Setting addon storage-provisioner=true in "addons-911602"
	I0717 20:16:06.564108  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.564794  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.585807  904497 addons.go:69] Setting metrics-server=true in profile "addons-911602"
	I0717 20:16:06.585897  904497 addons.go:231] Setting addon metrics-server=true in "addons-911602"
	I0717 20:16:06.585984  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.589438  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.607167  904497 addons.go:69] Setting registry=true in profile "addons-911602"
	I0717 20:16:06.607244  904497 addons.go:231] Setting addon registry=true in "addons-911602"
	I0717 20:16:06.607323  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.607877  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.615799  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 20:16:06.618268  904497 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 20:16:06.618333  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 20:16:06.618432  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:06.655064  904497 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 20:16:06.676874  904497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:16:06.684933  904497 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:16:06.685000  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:16:06.685108  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:06.685360  904497 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 20:16:06.685395  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 20:16:06.685443  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:06.758996  904497 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 20:16:06.762221  904497 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 20:16:06.762246  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 20:16:06.762333  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:06.766521  904497 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 20:16:06.768539  904497 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 20:16:06.768563  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 20:16:06.768633  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:06.771939  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 20:16:06.775449  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 20:16:06.777543  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 20:16:06.779797  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 20:16:06.781973  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 20:16:06.783974  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 20:16:06.785983  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 20:16:06.788048  904497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 20:16:06.789984  904497 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 20:16:06.790004  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 20:16:06.790071  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:06.865882  904497 addons.go:231] Setting addon default-storageclass=true in "addons-911602"
	I0717 20:16:06.865924  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.866366  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:06.868734  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:06.885214  904497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:16:06.904092  904497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 20:16:06.906286  904497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 20:16:06.909558  904497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 20:16:06.912057  904497 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 20:16:06.912081  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 20:16:06.912147  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:06.914246  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:06.916661  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:06.900905  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:06.961250  904497 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 20:16:06.963491  904497 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 20:16:06.963512  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 20:16:06.963579  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:06.967338  904497 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 20:16:06.969203  904497 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 20:16:06.971636  904497 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 20:16:06.971656  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 20:16:06.971725  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:07.038415  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:07.041286  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:07.042176  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:07.062801  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:07.117048  904497 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:16:07.117072  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:16:07.117149  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:07.126111  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:07.138162  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:07.159467  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:07.189239  904497 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-911602" context rescaled to 1 replicas
	I0717 20:16:07.189283  904497 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 20:16:07.192259  904497 out.go:177] * Verifying Kubernetes components...
	I0717 20:16:07.195242  904497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:16:07.453786  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:16:07.593254  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 20:16:07.715251  904497 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 20:16:07.715277  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 20:16:07.745180  904497 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 20:16:07.745207  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 20:16:07.756569  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:16:07.765058  904497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 20:16:07.765092  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 20:16:07.800346  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 20:16:07.859669  904497 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 20:16:07.859693  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 20:16:07.938800  904497 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 20:16:07.938826  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 20:16:07.989947  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 20:16:07.992815  904497 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 20:16:07.992840  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 20:16:08.027207  904497 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 20:16:08.027233  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 20:16:08.071977  904497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 20:16:08.072004  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 20:16:08.091041  904497 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 20:16:08.091067  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 20:16:08.171778  904497 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 20:16:08.171809  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 20:16:08.243637  904497 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 20:16:08.243662  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 20:16:08.293257  904497 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:16:08.293294  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 20:16:08.293553  904497 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 20:16:08.293569  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 20:16:08.351483  904497 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 20:16:08.351515  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 20:16:08.425378  904497 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 20:16:08.425404  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 20:16:08.493113  904497 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 20:16:08.493139  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 20:16:08.534417  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 20:16:08.539899  904497 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 20:16:08.539924  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 20:16:08.615970  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 20:16:08.661814  904497 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 20:16:08.661836  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 20:16:08.686597  904497 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 20:16:08.686629  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 20:16:08.722560  904497 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 20:16:08.722592  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 20:16:08.795837  904497 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.600558443s)
	I0717 20:16:08.795875  904497 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.910638138s)
	I0717 20:16:08.795957  904497 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 20:16:08.796902  904497 node_ready.go:35] waiting up to 6m0s for node "addons-911602" to be "Ready" ...
	I0717 20:16:08.800606  904497 node_ready.go:49] node "addons-911602" has status "Ready":"True"
	I0717 20:16:08.800642  904497 node_ready.go:38] duration metric: took 3.707595ms waiting for node "addons-911602" to be "Ready" ...
	I0717 20:16:08.800654  904497 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:16:08.809381  904497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-2ccm7" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:08.861607  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 20:16:08.866304  904497 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 20:16:08.866365  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 20:16:08.952698  904497 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 20:16:08.952718  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 20:16:09.118651  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 20:16:09.126844  904497 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 20:16:09.126910  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 20:16:09.320926  904497 pod_ready.go:97] error getting pod "coredns-5d78c9869d-2ccm7" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-2ccm7" not found
	I0717 20:16:09.320990  904497 pod_ready.go:81] duration metric: took 511.533362ms waiting for pod "coredns-5d78c9869d-2ccm7" in "kube-system" namespace to be "Ready" ...
	E0717 20:16:09.321018  904497 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5d78c9869d-2ccm7" in "kube-system" namespace (skipping!): pods "coredns-5d78c9869d-2ccm7" not found
	I0717 20:16:09.321038  904497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:09.321880  904497 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 20:16:09.321930  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 20:16:09.474249  904497 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 20:16:09.474321  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 20:16:09.635573  904497 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 20:16:09.635644  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 20:16:09.821582  904497 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 20:16:09.821653  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 20:16:09.854911  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 20:16:10.686951  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.233132731s)
	I0717 20:16:11.396557  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:13.392167  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.798830854s)
	I0717 20:16:13.392622  904497 addons.go:467] Verifying addon ingress=true in "addons-911602"
	I0717 20:16:13.398886  904497 out.go:177] * Verifying ingress addon...
	I0717 20:16:13.392276  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.635678801s)
	I0717 20:16:13.392378  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.40240594s)
	I0717 20:16:13.392432  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.857986788s)
	I0717 20:16:13.392456  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.776454145s)
	I0717 20:16:13.392534  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.530900473s)
	I0717 20:16:13.392583  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.27390604s)
	I0717 20:16:13.392598  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.5919673s)
	I0717 20:16:13.399141  904497 addons.go:467] Verifying addon metrics-server=true in "addons-911602"
	I0717 20:16:13.399185  904497 addons.go:467] Verifying addon registry=true in "addons-911602"
	W0717 20:16:13.399263  904497 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 20:16:13.403275  904497 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 20:16:13.405592  904497 out.go:177] * Verifying registry addon...
	I0717 20:16:13.408145  904497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 20:16:13.405706  904497 retry.go:31] will retry after 352.773909ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 20:16:13.409547  904497 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 20:16:13.409625  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:13.412960  904497 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 20:16:13.413021  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:13.725776  904497 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 20:16:13.725914  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:13.747378  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:13.763291  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 20:16:13.841501  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:13.920836  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:13.921786  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:14.119354  904497 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 20:16:14.235115  904497 addons.go:231] Setting addon gcp-auth=true in "addons-911602"
	I0717 20:16:14.235214  904497 host.go:66] Checking if "addons-911602" exists ...
	I0717 20:16:14.235701  904497 cli_runner.go:164] Run: docker container inspect addons-911602 --format={{.State.Status}}
	I0717 20:16:14.271894  904497 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 20:16:14.271951  904497 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-911602
	I0717 20:16:14.303511  904497 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33715 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/addons-911602/id_rsa Username:docker}
	I0717 20:16:14.439081  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:14.444191  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:14.467272  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.612269265s)
	I0717 20:16:14.467359  904497 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-911602"
	I0717 20:16:14.470141  904497 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 20:16:14.472825  904497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 20:16:14.502089  904497 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 20:16:14.502129  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:14.919821  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:14.927994  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:15.015314  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:15.415816  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:15.421130  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:15.516321  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:15.650053  904497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.886710476s)
	I0717 20:16:15.650129  904497 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.378217162s)
	I0717 20:16:15.657173  904497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 20:16:15.659408  904497 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 20:16:15.661285  904497 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 20:16:15.661310  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 20:16:15.685884  904497 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 20:16:15.685907  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 20:16:15.711987  904497 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 20:16:15.712012  904497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 20:16:15.737239  904497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 20:16:15.914855  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:15.924023  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:16.011425  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:16.337689  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:16.427077  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:16.428132  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:16.561540  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:16.665431  904497 addons.go:467] Verifying addon gcp-auth=true in "addons-911602"
	I0717 20:16:16.667486  904497 out.go:177] * Verifying gcp-auth addon...
	I0717 20:16:16.670133  904497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 20:16:16.695716  904497 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 20:16:16.695741  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:16.927992  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:16.929049  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:17.009164  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:17.200373  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:17.423140  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:17.424186  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:17.508444  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:17.700188  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:17.915205  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:17.918420  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:18.011513  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:18.201021  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:18.338025  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:18.414468  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:18.417847  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:18.518299  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:18.700062  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:18.914885  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:18.919821  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:19.012409  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:19.200142  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:19.417082  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:19.420760  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:19.511040  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:19.700383  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:19.917456  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:19.920121  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:20.010962  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:20.202048  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:20.414655  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:20.419362  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:20.510583  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:20.700699  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:20.841676  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:20.914945  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:20.918446  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:21.015073  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:21.200932  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:21.413924  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:21.419088  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:21.508096  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:21.699999  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:21.914856  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:21.919440  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:22.010280  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:22.200432  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:22.416083  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:22.418613  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:22.509028  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:22.700254  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:22.916251  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:22.919847  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:23.018222  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:23.202473  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:23.337432  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:23.416844  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:23.420274  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:23.508891  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:23.700017  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:23.914535  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:23.919408  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:24.011687  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:24.200241  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:24.416021  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:24.421735  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:24.508653  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:24.699785  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:24.915583  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:24.919601  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:25.010871  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:25.200658  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:25.338648  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:25.417203  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:25.421238  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:25.509211  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:25.699994  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:25.922789  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:25.923832  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:26.009511  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:26.200546  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:26.415624  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:26.421537  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:26.508787  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:26.699788  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:26.915407  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:26.918328  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:27.014202  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:27.199720  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:27.415363  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:27.418997  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:27.508329  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:27.699949  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:27.836605  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:27.914065  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:27.918303  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:28.012764  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:28.199873  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:28.415152  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:28.418470  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:28.508129  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:28.700063  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:28.914913  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:28.919141  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:29.009389  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:29.199947  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:29.415243  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:29.418446  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:29.507920  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:29.699654  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:29.837536  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:29.915605  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:29.918383  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:30.034397  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:30.201302  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:30.415568  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:30.418531  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:30.508960  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:30.699465  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:30.915335  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:30.918501  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:31.010687  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:31.200368  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:31.416002  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:31.421384  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:31.508478  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:31.699981  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:31.837620  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:31.914182  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:31.919142  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:32.012125  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:32.200382  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:32.416631  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:32.419266  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:32.508230  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:32.700760  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:32.917309  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:32.920360  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:33.009422  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:33.202809  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:33.414691  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:33.418148  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:33.509663  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:33.699659  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:33.914747  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:33.918120  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:34.009211  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:34.201520  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:34.338393  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:34.414769  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:34.418311  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:34.508252  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:34.699632  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:34.915282  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:34.919804  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:35.015578  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:35.200105  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:35.415100  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:35.419024  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:35.507385  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:35.699894  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:35.914527  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:35.917638  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:36.016308  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:36.200143  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:36.422704  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:36.424337  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:36.508505  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:36.699515  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:36.839425  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:36.914607  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:36.917918  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:37.016106  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:37.201942  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:37.414037  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:37.417306  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:37.508706  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:37.700360  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:37.913974  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:37.918944  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:38.012847  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:38.199781  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:38.415133  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:38.419221  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:38.508011  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:38.699541  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:38.914503  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:38.918040  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:39.010369  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:39.201098  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:39.337622  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:39.414336  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:39.417744  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:39.507536  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:39.700227  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:39.914402  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:39.917551  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:40.020797  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:40.200891  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:40.422831  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:40.424491  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:40.509804  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:40.699611  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:40.915965  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:40.919353  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:41.009154  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:41.200730  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:41.338588  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:41.417114  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:41.421588  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:41.508594  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:41.700086  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:41.914783  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:41.917948  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:42.012120  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:42.203603  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:42.418475  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:42.439852  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:42.510414  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:42.702213  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:42.914793  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:42.918440  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:43.020789  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:43.199678  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:43.415115  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:43.419216  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:43.515250  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:43.700239  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:43.837615  904497 pod_ready.go:102] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"False"
	I0717 20:16:43.914986  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:43.919284  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:44.011990  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:44.200349  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:44.416431  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:44.421051  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:44.510898  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:44.699290  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:44.917132  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:44.920664  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:45.014926  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:45.205610  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:45.415676  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:45.421722  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:45.512557  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:45.701969  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:45.841144  904497 pod_ready.go:92] pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace has status "Ready":"True"
	I0717 20:16:45.841170  904497 pod_ready.go:81] duration metric: took 36.520101141s waiting for pod "coredns-5d78c9869d-5zj98" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.841189  904497 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-911602" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.851322  904497 pod_ready.go:92] pod "etcd-addons-911602" in "kube-system" namespace has status "Ready":"True"
	I0717 20:16:45.851348  904497 pod_ready.go:81] duration metric: took 10.150976ms waiting for pod "etcd-addons-911602" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.851363  904497 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-911602" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.859047  904497 pod_ready.go:92] pod "kube-apiserver-addons-911602" in "kube-system" namespace has status "Ready":"True"
	I0717 20:16:45.859079  904497 pod_ready.go:81] duration metric: took 7.708643ms waiting for pod "kube-apiserver-addons-911602" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.859091  904497 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-911602" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.870315  904497 pod_ready.go:92] pod "kube-controller-manager-addons-911602" in "kube-system" namespace has status "Ready":"True"
	I0717 20:16:45.870345  904497 pod_ready.go:81] duration metric: took 11.246131ms waiting for pod "kube-controller-manager-addons-911602" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.870358  904497 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z5m9x" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.877167  904497 pod_ready.go:92] pod "kube-proxy-z5m9x" in "kube-system" namespace has status "Ready":"True"
	I0717 20:16:45.877195  904497 pod_ready.go:81] duration metric: took 6.829093ms waiting for pod "kube-proxy-z5m9x" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.877208  904497 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-911602" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:45.914933  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:45.919593  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:46.012181  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:46.199473  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:46.235458  904497 pod_ready.go:92] pod "kube-scheduler-addons-911602" in "kube-system" namespace has status "Ready":"True"
	I0717 20:16:46.235481  904497 pod_ready.go:81] duration metric: took 358.265282ms waiting for pod "kube-scheduler-addons-911602" in "kube-system" namespace to be "Ready" ...
	I0717 20:16:46.235491  904497 pod_ready.go:38] duration metric: took 37.434818949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:16:46.235506  904497 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:16:46.235565  904497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:16:46.251038  904497 api_server.go:72] duration metric: took 39.06171979s to wait for apiserver process to appear ...
	I0717 20:16:46.251101  904497 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:16:46.251132  904497 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 20:16:46.263371  904497 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 20:16:46.266705  904497 api_server.go:141] control plane version: v1.27.3
	I0717 20:16:46.266786  904497 api_server.go:131] duration metric: took 15.660215ms to wait for apiserver health ...
	I0717 20:16:46.266814  904497 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:16:46.416206  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:46.418778  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:46.441958  904497 system_pods.go:59] 17 kube-system pods found
	I0717 20:16:46.441996  904497 system_pods.go:61] "coredns-5d78c9869d-5zj98" [acfe2639-268e-489c-9ea0-b6266d08625d] Running
	I0717 20:16:46.442007  904497 system_pods.go:61] "csi-hostpath-attacher-0" [50f7b7a1-b446-4312-9095-d8c041c21c61] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 20:16:46.442016  904497 system_pods.go:61] "csi-hostpath-resizer-0" [083b376c-029e-4c80-843b-ac2261c7525e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 20:16:46.442026  904497 system_pods.go:61] "csi-hostpathplugin-pgvkl" [c5f32818-6a38-4344-950f-18448eaadefa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 20:16:46.442033  904497 system_pods.go:61] "etcd-addons-911602" [98382eaf-8728-49cc-a7cc-e2890c79cb9e] Running
	I0717 20:16:46.442039  904497 system_pods.go:61] "kindnet-449vf" [265e9d2c-2ae0-4d79-87f0-f38930b09359] Running
	I0717 20:16:46.442044  904497 system_pods.go:61] "kube-apiserver-addons-911602" [c8bfd5d8-2809-43a7-bd9e-951c8328470c] Running
	I0717 20:16:46.442049  904497 system_pods.go:61] "kube-controller-manager-addons-911602" [59894ab9-7d0b-45d8-abd2-f9753256b905] Running
	I0717 20:16:46.442063  904497 system_pods.go:61] "kube-ingress-dns-minikube" [e0099470-c1d7-4525-ad56-8c7f9ea866b5] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 20:16:46.442073  904497 system_pods.go:61] "kube-proxy-z5m9x" [01c70e4b-84b3-4fb7-b8ac-a658e93cb550] Running
	I0717 20:16:46.442078  904497 system_pods.go:61] "kube-scheduler-addons-911602" [d8a316f6-2a5e-4bf6-afb0-c807e667e7b2] Running
	I0717 20:16:46.442089  904497 system_pods.go:61] "metrics-server-844d8db974-nmrz5" [65b74af8-a62f-4378-a2df-10dfff824c7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:16:46.442097  904497 system_pods.go:61] "registry-jpmhq" [474f72fc-16e6-4e88-a1ec-a561c39042c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 20:16:46.442107  904497 system_pods.go:61] "registry-proxy-brg6s" [44b8f712-5ae3-4448-a5c6-6deffb38d32b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 20:16:46.442113  904497 system_pods.go:61] "snapshot-controller-75bbb956b9-7pst4" [d10b8203-41f8-4743-a170-b0f69574fddb] Running
	I0717 20:16:46.442121  904497 system_pods.go:61] "snapshot-controller-75bbb956b9-rlfxm" [b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 20:16:46.442129  904497 system_pods.go:61] "storage-provisioner" [46844ed5-f758-48ad-b81d-50f305e61e95] Running
	I0717 20:16:46.442138  904497 system_pods.go:74] duration metric: took 175.306427ms to wait for pod list to return data ...
	I0717 20:16:46.442147  904497 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:16:46.515554  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:46.634999  904497 default_sa.go:45] found service account: "default"
	I0717 20:16:46.635028  904497 default_sa.go:55] duration metric: took 192.872064ms for default service account to be created ...
	I0717 20:16:46.635039  904497 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:16:46.699569  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:46.843054  904497 system_pods.go:86] 17 kube-system pods found
	I0717 20:16:46.843097  904497 system_pods.go:89] "coredns-5d78c9869d-5zj98" [acfe2639-268e-489c-9ea0-b6266d08625d] Running
	I0717 20:16:46.843111  904497 system_pods.go:89] "csi-hostpath-attacher-0" [50f7b7a1-b446-4312-9095-d8c041c21c61] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 20:16:46.843124  904497 system_pods.go:89] "csi-hostpath-resizer-0" [083b376c-029e-4c80-843b-ac2261c7525e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 20:16:46.843136  904497 system_pods.go:89] "csi-hostpathplugin-pgvkl" [c5f32818-6a38-4344-950f-18448eaadefa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 20:16:46.843145  904497 system_pods.go:89] "etcd-addons-911602" [98382eaf-8728-49cc-a7cc-e2890c79cb9e] Running
	I0717 20:16:46.843152  904497 system_pods.go:89] "kindnet-449vf" [265e9d2c-2ae0-4d79-87f0-f38930b09359] Running
	I0717 20:16:46.843157  904497 system_pods.go:89] "kube-apiserver-addons-911602" [c8bfd5d8-2809-43a7-bd9e-951c8328470c] Running
	I0717 20:16:46.843175  904497 system_pods.go:89] "kube-controller-manager-addons-911602" [59894ab9-7d0b-45d8-abd2-f9753256b905] Running
	I0717 20:16:46.843194  904497 system_pods.go:89] "kube-ingress-dns-minikube" [e0099470-c1d7-4525-ad56-8c7f9ea866b5] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 20:16:46.843207  904497 system_pods.go:89] "kube-proxy-z5m9x" [01c70e4b-84b3-4fb7-b8ac-a658e93cb550] Running
	I0717 20:16:46.843214  904497 system_pods.go:89] "kube-scheduler-addons-911602" [d8a316f6-2a5e-4bf6-afb0-c807e667e7b2] Running
	I0717 20:16:46.843226  904497 system_pods.go:89] "metrics-server-844d8db974-nmrz5" [65b74af8-a62f-4378-a2df-10dfff824c7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 20:16:46.843241  904497 system_pods.go:89] "registry-jpmhq" [474f72fc-16e6-4e88-a1ec-a561c39042c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 20:16:46.843249  904497 system_pods.go:89] "registry-proxy-brg6s" [44b8f712-5ae3-4448-a5c6-6deffb38d32b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 20:16:46.843255  904497 system_pods.go:89] "snapshot-controller-75bbb956b9-7pst4" [d10b8203-41f8-4743-a170-b0f69574fddb] Running
	I0717 20:16:46.843265  904497 system_pods.go:89] "snapshot-controller-75bbb956b9-rlfxm" [b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 20:16:46.843275  904497 system_pods.go:89] "storage-provisioner" [46844ed5-f758-48ad-b81d-50f305e61e95] Running
	I0717 20:16:46.843283  904497 system_pods.go:126] duration metric: took 208.238617ms to wait for k8s-apps to be running ...
	I0717 20:16:46.843291  904497 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:16:46.843359  904497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:16:46.863155  904497 system_svc.go:56] duration metric: took 19.855543ms WaitForService to wait for kubelet.
	I0717 20:16:46.863184  904497 kubeadm.go:581] duration metric: took 39.673869296s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:16:46.863205  904497 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:16:46.915275  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:46.919108  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:47.009092  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:47.037591  904497 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 20:16:47.037622  904497 node_conditions.go:123] node cpu capacity is 2
	I0717 20:16:47.037647  904497 node_conditions.go:105] duration metric: took 174.435829ms to run NodePressure ...
	I0717 20:16:47.037659  904497 start.go:228] waiting for startup goroutines ...
	I0717 20:16:47.201173  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:47.414911  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:47.423522  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:47.512798  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:47.700393  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:47.914620  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:47.919469  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:48.014775  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:48.200795  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:48.417725  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:48.418629  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:48.509065  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:48.699774  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:48.915768  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:48.919413  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:49.009394  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:49.199662  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:49.416247  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:49.419195  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:49.508281  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:49.700992  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:49.916838  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:49.921174  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:50.030273  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:50.202800  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:50.415409  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:50.419610  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:50.515519  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:50.699352  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:50.914769  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:50.919143  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:51.015254  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:51.200543  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:51.413841  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:51.417943  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:51.509424  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:51.700278  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:51.915812  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:51.920873  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:52.013857  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:52.201941  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:52.421793  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:52.422946  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:52.510046  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:52.719054  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:52.918582  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:52.922640  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:53.011407  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:53.201000  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:53.415817  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:53.423859  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:53.510272  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:53.700556  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:53.921205  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:53.924813  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:54.011923  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:54.207969  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:54.421735  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:54.424170  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:54.510544  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:54.701024  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:54.916074  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:54.921958  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:55.021466  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:55.202866  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:55.419715  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:55.431653  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:55.515196  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:55.700477  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:55.914624  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:55.920816  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:56.008905  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:56.209837  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:56.415619  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:56.423639  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:56.510676  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:56.701243  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:56.917113  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:56.920523  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:57.018255  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:57.199787  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:57.416182  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:57.420300  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:57.509388  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:57.702820  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:57.916191  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:57.920334  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:58.010254  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:58.199599  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:58.415137  904497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 20:16:58.420294  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:58.509696  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:58.704000  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:58.916117  904497 kapi.go:107] duration metric: took 45.512838827s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 20:16:58.921771  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:59.011635  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:59.203197  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:59.419764  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:16:59.512886  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:16:59.700774  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:16:59.918993  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:00.031845  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:00.244157  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:00.422130  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:00.509106  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:00.700655  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:00.919344  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:01.009804  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:01.201953  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:01.431536  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:01.509194  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:01.700621  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:01.919432  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:02.012661  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:02.201795  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:02.420541  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:02.508509  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:02.700541  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:02.919304  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:03.009864  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:03.206308  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:03.419133  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:03.511808  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:03.700539  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:03.919003  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:04.013146  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:04.199819  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:04.420175  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:04.508350  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:04.700558  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:04.918396  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:05.019713  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:05.201147  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:05.419133  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:05.510218  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:05.699967  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:05.920266  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:06.017963  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:06.204714  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:06.418657  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:06.508967  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:06.700748  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 20:17:06.920900  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:07.013636  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:07.200209  904497 kapi.go:107] duration metric: took 50.530073995s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 20:17:07.202500  904497 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-911602 cluster.
	I0717 20:17:07.204269  904497 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 20:17:07.206123  904497 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 20:17:07.419078  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:07.508437  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:07.919138  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:08.010204  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:08.418596  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:08.508831  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:08.918699  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:09.009067  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:09.419337  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 20:17:09.509095  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:09.918455  904497 kapi.go:107] duration metric: took 56.510306322s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 20:17:10.019557  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:10.509075  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:11.012725  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:11.508340  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:12.014978  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:12.509401  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:13.012360  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:13.508526  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:14.010488  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:14.510575  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:15.016244  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:15.508500  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:16.014182  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:16.508416  904497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 20:17:17.009234  904497 kapi.go:107] duration metric: took 1m2.536404551s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 20:17:17.011371  904497 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, ingress-dns, inspektor-gadget, cloud-spanner, metrics-server, volumesnapshots, ingress, gcp-auth, registry, csi-hostpath-driver
	I0717 20:17:17.013369  904497 addons.go:502] enable addons completed in 1m10.464366561s: enabled=[storage-provisioner default-storageclass ingress-dns inspektor-gadget cloud-spanner metrics-server volumesnapshots ingress gcp-auth registry csi-hostpath-driver]
	I0717 20:17:17.013431  904497 start.go:233] waiting for cluster config update ...
	I0717 20:17:17.013469  904497 start.go:242] writing updated cluster config ...
	I0717 20:17:17.013813  904497 ssh_runner.go:195] Run: rm -f paused
	I0717 20:17:17.078723  904497 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:17:17.080957  904497 out.go:177] * Done! kubectl is now configured to use "addons-911602" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	308cd883753fd       13753a81eccfd       8 seconds ago        Exited              hello-world-app           2                   4fe036f1076b4       hello-world-app-65bdb79f98-7fmls
	ad1cf1c3b5d82       66bf2c914bf4d       34 seconds ago       Running             nginx                     0                   28c4a40ab21b7       nginx
	8f5d2c6870c42       e52b21e9e4589       About a minute ago   Running             headlamp                  0                   39656e01e14b0       headlamp-66f6498c69-lrwdw
	b5b802bb564c3       2a5f29343eb03       About a minute ago   Running             gcp-auth                  0                   2055b95ffb834       gcp-auth-58478865f7-ml2fx
	247d3a51ab5ab       8f2588812ab29       About a minute ago   Exited              patch                     0                   ea791863b8032       ingress-nginx-admission-patch-xx5hd
	fcddb68270014       97e04611ad434       About a minute ago   Running             coredns                   0                   10d0019074311       coredns-5d78c9869d-5zj98
	f5e7b98eca49c       8f2588812ab29       About a minute ago   Exited              create                    0                   20e23c963ffab       ingress-nginx-admission-create-h59lr
	872726abe13a9       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   6f9bc8eb2e1c3       storage-provisioner
	22e91f35db18d       b18bf71b941ba       2 minutes ago        Running             kindnet-cni               0                   fa27ddb0c3cef       kindnet-449vf
	88e22cf011e90       fb73e92641fd5       2 minutes ago        Running             kube-proxy                0                   3a4588739c534       kube-proxy-z5m9x
	d674a1c76821f       ab3683b584ae5       2 minutes ago        Running             kube-controller-manager   0                   4867cb08ba0f8       kube-controller-manager-addons-911602
	1b5e8a5a37cd0       24bc64e911039       2 minutes ago        Running             etcd                      0                   b5846b0dc94f4       etcd-addons-911602
	84b71f23b55a7       bcb9e554eaab6       2 minutes ago        Running             kube-scheduler            0                   fdefef953c1a2       kube-scheduler-addons-911602
	6319c4253001d       39dfb036b0986       2 minutes ago        Running             kube-apiserver            0                   bd2f9e0dc0bd2       kube-apiserver-addons-911602
	
	* 
	* ==> containerd <==
	* Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.281591643Z" level=info msg="StopContainer for \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\" returns successfully"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.282188452Z" level=info msg="StopPodSandbox for \"0f20aae78c1eca7ffe70b76f756dc9654bdebcbe9612461179b464bd15df6fc0\""
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.282323747Z" level=info msg="Container to stop \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.303347311Z" level=warning msg="cleanup warnings time=\"2023-07-17T20:18:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10076 runtime=io.containerd.runc.v2\n"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.308251884Z" level=info msg="StopContainer for \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\" returns successfully"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.309208169Z" level=info msg="StopPodSandbox for \"36f4a924d3a51821cfa3ae4afcf997fec86c2d8b5e042375ae3d1eb409866883\""
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.309398167Z" level=info msg="Container to stop \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.347452542Z" level=info msg="shim disconnected" id=0f20aae78c1eca7ffe70b76f756dc9654bdebcbe9612461179b464bd15df6fc0
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.348611240Z" level=warning msg="cleaning up after shim disconnected" id=0f20aae78c1eca7ffe70b76f756dc9654bdebcbe9612461179b464bd15df6fc0 namespace=k8s.io
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.348766825Z" level=info msg="cleaning up dead shim"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.361174795Z" level=warning msg="cleanup warnings time=\"2023-07-17T20:18:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10130 runtime=io.containerd.runc.v2\n"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.371093234Z" level=info msg="shim disconnected" id=36f4a924d3a51821cfa3ae4afcf997fec86c2d8b5e042375ae3d1eb409866883
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.371332357Z" level=warning msg="cleaning up after shim disconnected" id=36f4a924d3a51821cfa3ae4afcf997fec86c2d8b5e042375ae3d1eb409866883 namespace=k8s.io
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.371404907Z" level=info msg="cleaning up dead shim"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.384508926Z" level=warning msg="cleanup warnings time=\"2023-07-17T20:18:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10159 runtime=io.containerd.runc.v2\n"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.385732108Z" level=info msg="TearDown network for sandbox \"0f20aae78c1eca7ffe70b76f756dc9654bdebcbe9612461179b464bd15df6fc0\" successfully"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.385786467Z" level=info msg="StopPodSandbox for \"0f20aae78c1eca7ffe70b76f756dc9654bdebcbe9612461179b464bd15df6fc0\" returns successfully"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.421642332Z" level=info msg="TearDown network for sandbox \"36f4a924d3a51821cfa3ae4afcf997fec86c2d8b5e042375ae3d1eb409866883\" successfully"
	Jul 17 20:18:24 addons-911602 containerd[744]: time="2023-07-17T20:18:24.421862894Z" level=info msg="StopPodSandbox for \"36f4a924d3a51821cfa3ae4afcf997fec86c2d8b5e042375ae3d1eb409866883\" returns successfully"
	Jul 17 20:18:25 addons-911602 containerd[744]: time="2023-07-17T20:18:25.200716400Z" level=info msg="RemoveContainer for \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\""
	Jul 17 20:18:25 addons-911602 containerd[744]: time="2023-07-17T20:18:25.211918435Z" level=info msg="RemoveContainer for \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\" returns successfully"
	Jul 17 20:18:25 addons-911602 containerd[744]: time="2023-07-17T20:18:25.213670676Z" level=error msg="ContainerStatus for \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\": not found"
	Jul 17 20:18:25 addons-911602 containerd[744]: time="2023-07-17T20:18:25.215059995Z" level=info msg="RemoveContainer for \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\""
	Jul 17 20:18:25 addons-911602 containerd[744]: time="2023-07-17T20:18:25.221185567Z" level=info msg="RemoveContainer for \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\" returns successfully"
	Jul 17 20:18:25 addons-911602 containerd[744]: time="2023-07-17T20:18:25.222227382Z" level=error msg="ContainerStatus for \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\": not found"
	
	* 
	* ==> coredns [fcddb68270014f562d397e830bb11cce4b5c7b8446168f46510f75bb3febcf39] <==
	* [INFO] 10.244.0.12:48274 - 28257 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000220596s
	[INFO] 10.244.0.12:42653 - 2453 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004084461s
	[INFO] 10.244.0.12:48274 - 52106 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004302685s
	[INFO] 10.244.0.12:48274 - 61584 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00207988s
	[INFO] 10.244.0.12:42653 - 6133 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00285291s
	[INFO] 10.244.0.12:42653 - 7527 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135032s
	[INFO] 10.244.0.12:48274 - 31786 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103311s
	[INFO] 10.244.0.12:50618 - 22476 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099274s
	[INFO] 10.244.0.12:50618 - 16033 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065789s
	[INFO] 10.244.0.12:50618 - 5611 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079582s
	[INFO] 10.244.0.12:50618 - 16203 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061375s
	[INFO] 10.244.0.12:50618 - 64716 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061572s
	[INFO] 10.244.0.12:50618 - 26368 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066314s
	[INFO] 10.244.0.12:50618 - 64755 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001202669s
	[INFO] 10.244.0.12:55705 - 51607 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000847763s
	[INFO] 10.244.0.12:55705 - 46384 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000466043s
	[INFO] 10.244.0.12:55705 - 33981 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000672s
	[INFO] 10.244.0.12:50618 - 26299 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001353126s
	[INFO] 10.244.0.12:55705 - 48251 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068652s
	[INFO] 10.244.0.12:55705 - 13983 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000789s
	[INFO] 10.244.0.12:55705 - 44781 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000186528s
	[INFO] 10.244.0.12:50618 - 25035 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069596s
	[INFO] 10.244.0.12:55705 - 9325 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001249667s
	[INFO] 10.244.0.12:55705 - 31822 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00097025s
	[INFO] 10.244.0.12:55705 - 32667 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104894s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-911602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-911602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=addons-911602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T20_15_54_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-911602
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 20:15:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-911602
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:18:25 +0000   Mon, 17 Jul 2023 20:15:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:18:25 +0000   Mon, 17 Jul 2023 20:15:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:18:25 +0000   Mon, 17 Jul 2023 20:15:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:18:25 +0000   Mon, 17 Jul 2023 20:16:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-911602
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb4d2491d80e4376a3af907d98ed1f1e
	  System UUID:                15a7a9f0-3b1b-4062-ad7b-211b99bed459
	  Boot ID:                    cbdc664b-32f3-4468-95d3-fdbd4fe2a3f0
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-7fmls         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  gcp-auth                    gcp-auth-58478865f7-ml2fx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  headlamp                    headlamp-66f6498c69-lrwdw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-5d78c9869d-5zj98                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m24s
	  kube-system                 etcd-addons-911602                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m37s
	  kube-system                 kindnet-449vf                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-addons-911602             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-addons-911602    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-proxy-z5m9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-addons-911602             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m45s (x8 over 2m45s)  kubelet          Node addons-911602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s (x8 over 2m45s)  kubelet          Node addons-911602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m45s (x7 over 2m45s)  kubelet          Node addons-911602 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m37s                  kubelet          Node addons-911602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s                  kubelet          Node addons-911602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s                  kubelet          Node addons-911602 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m37s                  kubelet          Node addons-911602 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m27s                  kubelet          Node addons-911602 status is now: NodeReady
	  Normal  RegisteredNode           2m24s                  node-controller  Node addons-911602 event: Registered Node addons-911602 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001063] FS-Cache: O-key=[8] '7f70ed0000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=000000000634e0be
	[  +0.001055] FS-Cache: N-key=[8] '7f70ed0000000000'
	[  +0.003218] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000029 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=000000009d0a711b
	[  +0.001085] FS-Cache: O-key=[8] '7f70ed0000000000'
	[  +0.000701] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000f945483a
	[  +0.001066] FS-Cache: N-key=[8] '7f70ed0000000000'
	[  +2.901901] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=0000000032dcb531
	[  +0.001173] FS-Cache: O-key=[8] '7e70ed0000000000'
	[  +0.000773] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001012] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=000000000634e0be
	[  +0.001123] FS-Cache: N-key=[8] '7e70ed0000000000'
	[  +0.313793] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000963] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000a176162d
	[  +0.001112] FS-Cache: O-key=[8] '8470ed0000000000'
	[  +0.000713] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=0000000099715709
	[  +0.001033] FS-Cache: N-key=[8] '8470ed0000000000'
	
	* 
	* ==> etcd [1b5e8a5a37cd0e760ad46b19217d59f3e0e274e519dc73ee6df90e73dc429374] <==
	* {"level":"info","ts":"2023-07-17T20:15:46.028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-07-17T20:15:46.028Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-07-17T20:15:46.030Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T20:15:46.030Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T20:15:46.030Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T20:15:46.036Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-17T20:15:46.036Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-17T20:15:46.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T20:15:46.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T20:15:46.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-07-17T20:15:46.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T20:15:46.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T20:15:46.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-07-17T20:15:46.600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T20:15:46.605Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-911602 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T20:15:46.605Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:15:46.605Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:15:46.607Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T20:15:46.607Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:15:46.622Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-07-17T20:15:46.622Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:15:46.622Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:15:46.623Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:15:46.623Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T20:15:46.623Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [b5b802bb564c3d55890ffd65689e846f605ddcfdfdb729ece55226b8adb9badd] <==
	* 2023/07/17 20:17:06 GCP Auth Webhook started!
	2023/07/17 20:17:24 Ready to marshal response ...
	2023/07/17 20:17:24 Ready to write response ...
	2023/07/17 20:17:24 Ready to marshal response ...
	2023/07/17 20:17:24 Ready to write response ...
	2023/07/17 20:17:24 Ready to marshal response ...
	2023/07/17 20:17:24 Ready to write response ...
	2023/07/17 20:17:27 Ready to marshal response ...
	2023/07/17 20:17:27 Ready to write response ...
	2023/07/17 20:17:49 Ready to marshal response ...
	2023/07/17 20:17:49 Ready to write response ...
	2023/07/17 20:17:52 Ready to marshal response ...
	2023/07/17 20:17:52 Ready to write response ...
	2023/07/17 20:18:04 Ready to marshal response ...
	2023/07/17 20:18:04 Ready to write response ...
	2023/07/17 20:18:09 Ready to marshal response ...
	2023/07/17 20:18:09 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:18:30 up  4:00,  0 users,  load average: 1.73, 2.06, 2.24
	Linux addons-911602 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [22e91f35db18d56bf02a3b578523383f4e64e80cd546ff6a9a240955efdfa46c] <==
	* I0717 20:16:37.861088       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0717 20:16:37.876285       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:16:37.876320       1 main.go:227] handling current node
	I0717 20:16:47.892062       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:16:47.892093       1 main.go:227] handling current node
	I0717 20:16:57.905214       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:16:57.905240       1 main.go:227] handling current node
	I0717 20:17:07.910325       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:17:07.910355       1 main.go:227] handling current node
	I0717 20:17:17.914103       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:17:17.914133       1 main.go:227] handling current node
	I0717 20:17:27.918152       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:17:27.918185       1 main.go:227] handling current node
	I0717 20:17:37.931677       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:17:37.931886       1 main.go:227] handling current node
	I0717 20:17:47.936092       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:17:47.936122       1 main.go:227] handling current node
	I0717 20:17:57.946754       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:17:57.946781       1 main.go:227] handling current node
	I0717 20:18:07.958068       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:18:07.958259       1 main.go:227] handling current node
	I0717 20:18:17.971097       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:18:17.971125       1 main.go:227] handling current node
	I0717 20:18:27.984277       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:18:27.984310       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6319c4253001d4daae73f6122133743ff08a5b4db201e624bb9278fe5ce1b4c1] <==
	* E0717 20:18:03.131327       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 20:18:03.131417       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 20:18:03.149374       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0717 20:18:04.946269       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.101.161.134]
	E0717 20:18:09.956208       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[system node-high leader-election workload-high workload-low global-default catch-all] items=[{target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649} {target:NaN lowerBound:13 upperBound:613}]
	E0717 20:18:19.957037       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[system node-high leader-election workload-high workload-low global-default catch-all] items=[{target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649} {target:NaN lowerBound:13 upperBound:613}]
	I0717 20:18:23.907016       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 20:18:23.907061       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 20:18:23.943453       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 20:18:23.943767       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 20:18:23.972139       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 20:18:23.972501       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 20:18:24.035518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 20:18:24.035904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 20:18:24.053487       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 20:18:24.054601       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 20:18:24.080820       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 20:18:24.082485       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 20:18:24.099551       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 20:18:24.099604       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 20:18:25.055349       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 20:18:25.101005       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 20:18:25.113888       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0717 20:18:29.957941       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[workload-low global-default catch-all system node-high leader-election workload-high] items=[{target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649} {target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698}]
	
	* 
	* ==> kube-controller-manager [d674a1c76821f237ac7e7e62cfdd4c2c4fba4fb21256afa98beb96bb20a0a430] <==
	* I0717 20:18:06.485712       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 20:18:06.836898       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0717 20:18:06.836949       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 20:18:08.110886       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"hostpath.csi.k8s.io\" or manually created by system administrator"
	I0717 20:18:17.313369       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0717 20:18:17.399839       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I0717 20:18:21.862659       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0717 20:18:21.868715       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0717 20:18:22.998121       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:22.998159       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:25.057592       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:25.103572       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:25.116768       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 20:18:26.121261       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:26.121296       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 20:18:26.269429       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:26.269464       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 20:18:26.571584       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:26.571619       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 20:18:28.898801       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:28.898840       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 20:18:28.943606       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:28.943645       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 20:18:29.165675       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 20:18:29.165710       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [88e22cf011e9060566b9cdbae182e3a206af8315b0f5d737141518f66d06c3c5] <==
	* I0717 20:16:07.549701       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0717 20:16:07.549817       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0717 20:16:07.549840       1 server_others.go:554] "Using iptables proxy"
	I0717 20:16:07.601253       1 server_others.go:192] "Using iptables Proxier"
	I0717 20:16:07.601283       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 20:16:07.601292       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 20:16:07.601308       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 20:16:07.601375       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 20:16:07.601941       1 server.go:658] "Version info" version="v1.27.3"
	I0717 20:16:07.601952       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 20:16:07.603250       1 config.go:188] "Starting service config controller"
	I0717 20:16:07.603262       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 20:16:07.603279       1 config.go:97] "Starting endpoint slice config controller"
	I0717 20:16:07.603283       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 20:16:07.603607       1 config.go:315] "Starting node config controller"
	I0717 20:16:07.603614       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 20:16:07.703826       1 shared_informer.go:318] Caches are synced for node config
	I0717 20:16:07.703864       1 shared_informer.go:318] Caches are synced for service config
	I0717 20:16:07.703913       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [84b71f23b55a7c8601fab294c432e5379a3ee232a12b3bc5069e10c8ceabb2a2] <==
	* W0717 20:15:49.908699       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:15:49.909777       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:15:49.908790       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 20:15:49.908877       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:15:49.908942       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:15:49.908994       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:15:49.909061       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 20:15:49.909134       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:15:49.909181       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:15:49.909246       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 20:15:49.909675       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:15:49.909741       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 20:15:50.814905       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:15:50.815187       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 20:15:50.859724       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:15:50.859952       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 20:15:50.873408       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 20:15:50.873626       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 20:15:50.941147       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:15:50.941370       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 20:15:50.984250       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:15:50.984471       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 20:15:51.017972       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:15:51.018014       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0717 20:15:51.494881       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 20:18:23 addons-911602 kubelet[1342]: E0717 20:18:23.178864    1342 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1dad4bfd50d80f621a9a051fedfc00cca9c475d7b8b5b27da8113b9cabb5b5f\": not found" containerID="c1dad4bfd50d80f621a9a051fedfc00cca9c475d7b8b5b27da8113b9cabb5b5f"
	Jul 17 20:18:23 addons-911602 kubelet[1342]: I0717 20:18:23.178910    1342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c1dad4bfd50d80f621a9a051fedfc00cca9c475d7b8b5b27da8113b9cabb5b5f} err="failed to get container status \"c1dad4bfd50d80f621a9a051fedfc00cca9c475d7b8b5b27da8113b9cabb5b5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1dad4bfd50d80f621a9a051fedfc00cca9c475d7b8b5b27da8113b9cabb5b5f\": not found"
	Jul 17 20:18:23 addons-911602 kubelet[1342]: I0717 20:18:23.250176    1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmkb\" (UniqueName: \"kubernetes.io/projected/d06e007d-0d4d-4c8f-8eda-c74f49d1bc85-kube-api-access-vcmkb\") pod \"d06e007d-0d4d-4c8f-8eda-c74f49d1bc85\" (UID: \"d06e007d-0d4d-4c8f-8eda-c74f49d1bc85\") "
	Jul 17 20:18:23 addons-911602 kubelet[1342]: I0717 20:18:23.250691    1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d06e007d-0d4d-4c8f-8eda-c74f49d1bc85-webhook-cert\") pod \"d06e007d-0d4d-4c8f-8eda-c74f49d1bc85\" (UID: \"d06e007d-0d4d-4c8f-8eda-c74f49d1bc85\") "
	Jul 17 20:18:23 addons-911602 kubelet[1342]: I0717 20:18:23.262720    1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d06e007d-0d4d-4c8f-8eda-c74f49d1bc85-kube-api-access-vcmkb" (OuterVolumeSpecName: "kube-api-access-vcmkb") pod "d06e007d-0d4d-4c8f-8eda-c74f49d1bc85" (UID: "d06e007d-0d4d-4c8f-8eda-c74f49d1bc85"). InnerVolumeSpecName "kube-api-access-vcmkb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 20:18:23 addons-911602 kubelet[1342]: I0717 20:18:23.270717    1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d06e007d-0d4d-4c8f-8eda-c74f49d1bc85-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d06e007d-0d4d-4c8f-8eda-c74f49d1bc85" (UID: "d06e007d-0d4d-4c8f-8eda-c74f49d1bc85"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 20:18:23 addons-911602 kubelet[1342]: I0717 20:18:23.351361    1342 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d06e007d-0d4d-4c8f-8eda-c74f49d1bc85-webhook-cert\") on node \"addons-911602\" DevicePath \"\""
	Jul 17 20:18:23 addons-911602 kubelet[1342]: I0717 20:18:23.351402    1342 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vcmkb\" (UniqueName: \"kubernetes.io/projected/d06e007d-0d4d-4c8f-8eda-c74f49d1bc85-kube-api-access-vcmkb\") on node \"addons-911602\" DevicePath \"\""
	Jul 17 20:18:24 addons-911602 kubelet[1342]: I0717 20:18:24.564261    1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh8cj\" (UniqueName: \"kubernetes.io/projected/d10b8203-41f8-4743-a170-b0f69574fddb-kube-api-access-mh8cj\") pod \"d10b8203-41f8-4743-a170-b0f69574fddb\" (UID: \"d10b8203-41f8-4743-a170-b0f69574fddb\") "
	Jul 17 20:18:24 addons-911602 kubelet[1342]: I0717 20:18:24.564326    1342 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfd2x\" (UniqueName: \"kubernetes.io/projected/b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f-kube-api-access-dfd2x\") pod \"b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f\" (UID: \"b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f\") "
	Jul 17 20:18:24 addons-911602 kubelet[1342]: I0717 20:18:24.566671    1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f-kube-api-access-dfd2x" (OuterVolumeSpecName: "kube-api-access-dfd2x") pod "b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f" (UID: "b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f"). InnerVolumeSpecName "kube-api-access-dfd2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 20:18:24 addons-911602 kubelet[1342]: I0717 20:18:24.566763    1342 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10b8203-41f8-4743-a170-b0f69574fddb-kube-api-access-mh8cj" (OuterVolumeSpecName: "kube-api-access-mh8cj") pod "d10b8203-41f8-4743-a170-b0f69574fddb" (UID: "d10b8203-41f8-4743-a170-b0f69574fddb"). InnerVolumeSpecName "kube-api-access-mh8cj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 20:18:24 addons-911602 kubelet[1342]: I0717 20:18:24.665419    1342 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dfd2x\" (UniqueName: \"kubernetes.io/projected/b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f-kube-api-access-dfd2x\") on node \"addons-911602\" DevicePath \"\""
	Jul 17 20:18:24 addons-911602 kubelet[1342]: I0717 20:18:24.665631    1342 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mh8cj\" (UniqueName: \"kubernetes.io/projected/d10b8203-41f8-4743-a170-b0f69574fddb-kube-api-access-mh8cj\") on node \"addons-911602\" DevicePath \"\""
	Jul 17 20:18:25 addons-911602 kubelet[1342]: I0717 20:18:25.064719    1342 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d06e007d-0d4d-4c8f-8eda-c74f49d1bc85 path="/var/lib/kubelet/pods/d06e007d-0d4d-4c8f-8eda-c74f49d1bc85/volumes"
	Jul 17 20:18:25 addons-911602 kubelet[1342]: I0717 20:18:25.198649    1342 scope.go:115] "RemoveContainer" containerID="afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6"
	Jul 17 20:18:25 addons-911602 kubelet[1342]: I0717 20:18:25.212325    1342 scope.go:115] "RemoveContainer" containerID="afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6"
	Jul 17 20:18:25 addons-911602 kubelet[1342]: E0717 20:18:25.213881    1342 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\": not found" containerID="afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6"
	Jul 17 20:18:25 addons-911602 kubelet[1342]: I0717 20:18:25.213918    1342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6} err="failed to get container status \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"afe898db4eacd5879de47545009b06f408b2338a7b4013b21bfb3589b22bcbb6\": not found"
	Jul 17 20:18:25 addons-911602 kubelet[1342]: I0717 20:18:25.213934    1342 scope.go:115] "RemoveContainer" containerID="812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288"
	Jul 17 20:18:25 addons-911602 kubelet[1342]: I0717 20:18:25.221855    1342 scope.go:115] "RemoveContainer" containerID="812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288"
	Jul 17 20:18:25 addons-911602 kubelet[1342]: E0717 20:18:25.222463    1342 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\": not found" containerID="812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288"
	Jul 17 20:18:25 addons-911602 kubelet[1342]: I0717 20:18:25.222514    1342 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288} err="failed to get container status \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\": rpc error: code = NotFound desc = an error occurred when try to find container \"812d8b128a5b926d2173ccc9c9430cd4fa8f09bc0ece71962e17103b7b87e288\": not found"
	Jul 17 20:18:27 addons-911602 kubelet[1342]: I0717 20:18:27.066366    1342 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f path="/var/lib/kubelet/pods/b3b1b1c4-2f5f-49f8-8a89-8af9e709a14f/volumes"
	Jul 17 20:18:27 addons-911602 kubelet[1342]: I0717 20:18:27.066995    1342 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d10b8203-41f8-4743-a170-b0f69574fddb path="/var/lib/kubelet/pods/d10b8203-41f8-4743-a170-b0f69574fddb/volumes"
	
	* 
	* ==> storage-provisioner [872726abe13a917aaa11eb43d40fd7c67015ff1699b64ac7e7daed717fa715c5] <==
	* I0717 20:16:11.827716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 20:16:11.886073       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 20:16:11.886162       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 20:16:11.896419       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 20:16:11.896529       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70c18c55-c2b6-423f-a116-811724ccac27", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-911602_fc727d56-140f-450b-8c54-886ad53ccc87 became leader
	I0717 20:16:11.898629       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-911602_fc727d56-140f-450b-8c54-886ad53ccc87!
	I0717 20:16:11.999206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-911602_fc727d56-140f-450b-8c54-886ad53ccc87!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-911602 -n addons-911602
helpers_test.go:261: (dbg) Run:  kubectl --context addons-911602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (40.11s)

                                                
                                    
TestDockerEnvContainerd (46.25s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-035354 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-035354 --driver=docker  --container-runtime=containerd: (30.153907925s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-035354"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-035354": (1.349162094s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UagShbDlkVNc/agent.919734" SSH_AGENT_PID="919735" DOCKER_HOST=ssh://docker@127.0.0.1:33720 docker version"
docker_test.go:220: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-UagShbDlkVNc/agent.919734" SSH_AGENT_PID="919735" DOCKER_HOST=ssh://docker@127.0.0.1:33720 docker version": exit status 1 (277.358485ms)

                                                
                                                
-- stdout --
	Client: Docker Engine - Community
	 Version:           24.0.4
	 API version:       1.43
	 Go version:        go1.20.5
	 Git commit:        3713ee1
	 Built:             Fri Jul  7 14:50:52 2023
	 OS/Arch:           linux/arm64
	 Context:           default

                                                
                                                
-- /stdout --
** stderr ** 
	error during connect: Get "http://docker.example.com/v1.24/version": command [ssh -o ConnectTimeout=30 -l docker -p 33720 -- 127.0.0.1 docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
	@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
	Someone could be eavesdropping on you right now (man-in-the-middle attack)!
	It is also possible that a host key has just been changed.
	The fingerprint for the RSA key sent by the remote host is
	SHA256:rnKjLoScJJwTletwusPNkv1PJbBy/KUwU5I2Bd9PZtg.
	Please contact your system administrator.
	Add correct host key in /home/jenkins/.ssh/known_hosts to get rid of this message.
	Offending RSA key in /home/jenkins/.ssh/known_hosts:8
	  remove with:
	  ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:33720"
	RSA host key for [127.0.0.1]:33720 has changed and you have requested strict checking.
	Host key verification failed.
	

                                                
                                                
** /stderr **
docker_test.go:222: failed to execute 'docker version', error: exit status 1, output: 
-- stdout --
	Client: Docker Engine - Community
	 Version:           24.0.4
	 API version:       1.43
	 Go version:        go1.20.5
	 Git commit:        3713ee1
	 Built:             Fri Jul  7 14:50:52 2023
	 OS/Arch:           linux/arm64
	 Context:           default

                                                
                                                
-- /stdout --
** stderr ** 
	error during connect: Get "http://docker.example.com/v1.24/version": command [ssh -o ConnectTimeout=30 -l docker -p 33720 -- 127.0.0.1 docker system dial-stdio] has exited with exit status 255, please make sure the URL is valid, and Docker 18.09 or later is installed on the remote host: stderr=@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
	@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
	IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
	Someone could be eavesdropping on you right now (man-in-the-middle attack)!
	It is also possible that a host key has just been changed.
	The fingerprint for the RSA key sent by the remote host is
	SHA256:rnKjLoScJJwTletwusPNkv1PJbBy/KUwU5I2Bd9PZtg.
	Please contact your system administrator.
	Add correct host key in /home/jenkins/.ssh/known_hosts to get rid of this message.
	Offending RSA key in /home/jenkins/.ssh/known_hosts:8
	  remove with:
	  ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:33720"
	RSA host key for [127.0.0.1]:33720 has changed and you have requested strict checking.
	Host key verification failed.
	

                                                
                                                
** /stderr **
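The `docker version` failure above is not a Docker problem: the forwarded SSH port 33720 was reused by a freshly created minikube node, so the old entry in `/home/jenkins/.ssh/known_hosts` no longer matches the new host key and strict checking aborts the connection. A minimal sketch of the cleanup the error output itself recommends, using a throwaway known_hosts file so nothing on a real host is modified:

```shell
# Sketch of clearing a stale known_hosts entry with `ssh-keygen -R`,
# mirroring the remediation printed in the error output above. A temp
# file stands in for /home/jenkins/.ssh/known_hosts.
tmpdir="$(mktemp -d)"

# Generate a throwaway host key and record it for the forwarded port,
# simulating the stale entry left behind by a previous minikube node.
ssh-keygen -q -t ed25519 -N '' -f "$tmpdir/hostkey"
printf '[127.0.0.1]:33720 %s\n' "$(cat "$tmpdir/hostkey.pub")" > "$tmpdir/known_hosts"

# Remove the offending entry; ssh-keygen leaves a backup in known_hosts.old.
ssh-keygen -f "$tmpdir/known_hosts" -R "[127.0.0.1]:33720"

# grep -c prints 0 when no lines match (and exits non-zero, hence || true).
remaining="$(grep -c '33720' "$tmpdir/known_hosts" || true)"
echo "entries remaining: $remaining"
rm -rf "$tmpdir"
```

On the actual CI host the equivalent command is the one the log prints: `ssh-keygen -f "/home/jenkins/.ssh/known_hosts" -R "[127.0.0.1]:33720"`.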
panic.go:522: *** TestDockerEnvContainerd FAILED at 2023-07-17 20:19:51.216431602 +0000 UTC m=+316.909913852
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect dockerenv-035354
helpers_test.go:235: (dbg) docker inspect dockerenv-035354:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff3978c829275d5ea6fd817f34e6fbc24a277aa7ab2f10dd149b7a84ae122e7b",
	        "Created": "2023-07-17T20:19:14.90330256Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 917949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:19:15.301571161Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/ff3978c829275d5ea6fd817f34e6fbc24a277aa7ab2f10dd149b7a84ae122e7b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff3978c829275d5ea6fd817f34e6fbc24a277aa7ab2f10dd149b7a84ae122e7b/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff3978c829275d5ea6fd817f34e6fbc24a277aa7ab2f10dd149b7a84ae122e7b/hosts",
	        "LogPath": "/var/lib/docker/containers/ff3978c829275d5ea6fd817f34e6fbc24a277aa7ab2f10dd149b7a84ae122e7b/ff3978c829275d5ea6fd817f34e6fbc24a277aa7ab2f10dd149b7a84ae122e7b-json.log",
	        "Name": "/dockerenv-035354",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-035354:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "dockerenv-035354",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9478cc05a2da6b2616ba55256c9e9f1c987045d59e6a6679aa72053891b62ed0-init/diff:/var/lib/docker/overlay2/7007f4a8945aebd939b8429923b1b654b284bda949467104beab22408cb6f264/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9478cc05a2da6b2616ba55256c9e9f1c987045d59e6a6679aa72053891b62ed0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9478cc05a2da6b2616ba55256c9e9f1c987045d59e6a6679aa72053891b62ed0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9478cc05a2da6b2616ba55256c9e9f1c987045d59e6a6679aa72053891b62ed0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-035354",
	                "Source": "/var/lib/docker/volumes/dockerenv-035354/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-035354",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-035354",
	                "name.minikube.sigs.k8s.io": "dockerenv-035354",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90cdc16e399b039a06ff1ea16dc9055e6e28df334e6f2856c10593cdfe691c7f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33720"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33719"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33716"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33718"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33717"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/90cdc16e399b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-035354": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ff3978c82927",
	                        "dockerenv-035354"
	                    ],
	                    "NetworkID": "5f4d103456c387af3bafe0e541b7594ca8a3d6c9d965fb6d1c9d4c5bd41a3c89",
	                    "EndpointID": "e918ce7b2a51ebd5e605833de80902fe976109bd9239fa396b983344fb06f407",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p dockerenv-035354 -n dockerenv-035354
helpers_test.go:244: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p dockerenv-035354 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p dockerenv-035354 logs -n 25: (1.351061335s)
helpers_test.go:252: TestDockerEnvContainerd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	|  Command   |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start      | --download-only -p             | download-docker-219942 | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC |                     |
	|            | download-docker-219942         |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete     | -p download-docker-219942      | download-docker-219942 | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:15 UTC |
	| start      | --download-only -p             | binary-mirror-408945   | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC |                     |
	|            | binary-mirror-408945           |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --binary-mirror                |                        |         |         |                     |                     |
	|            | http://127.0.0.1:34645         |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| delete     | -p binary-mirror-408945        | binary-mirror-408945   | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:15 UTC |
	| start      | -p addons-911602               | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:15 UTC | 17 Jul 23 20:17 UTC |
	|            | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|            | --alsologtostderr              |                        |         |         |                     |                     |
	|            | --addons=registry              |                        |         |         |                     |                     |
	|            | --addons=metrics-server        |                        |         |         |                     |                     |
	|            | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|            | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|            | --addons=gcp-auth              |                        |         |         |                     |                     |
	|            | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|            | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	|            | --addons=ingress               |                        |         |         |                     |                     |
	|            | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons     | disable cloud-spanner -p       | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|            | addons-911602                  |                        |         |         |                     |                     |
	| addons     | enable headlamp                | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|            | -p addons-911602               |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip         | addons-911602 ip               | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	| addons     | addons-911602 addons disable   | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|            | registry --alsologtostderr     |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | addons-911602 addons           | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|            | disable metrics-server         |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | disable inspektor-gadget -p    | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:17 UTC | 17 Jul 23 20:17 UTC |
	|            | addons-911602                  |                        |         |         |                     |                     |
	| ssh        | addons-911602 ssh curl -s      | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|            | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|            | nginx.example.com'             |                        |         |         |                     |                     |
	| ip         | addons-911602 ip               | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	| addons     | addons-911602 addons           | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|            | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | addons-911602 addons disable   | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|            | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| addons     | addons-911602 addons disable   | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|            | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons     | addons-911602 addons           | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|            | disable volumesnapshots        |                        |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons     | addons-911602 addons disable   | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:18 UTC |
	|            | gcp-auth --alsologtostderr     |                        |         |         |                     |                     |
	|            | -v=1                           |                        |         |         |                     |                     |
	| stop       | -p addons-911602               | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:18 UTC | 17 Jul 23 20:19 UTC |
	| addons     | enable dashboard -p            | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:19 UTC | 17 Jul 23 20:19 UTC |
	|            | addons-911602                  |                        |         |         |                     |                     |
	| addons     | disable dashboard -p           | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:19 UTC | 17 Jul 23 20:19 UTC |
	|            | addons-911602                  |                        |         |         |                     |                     |
	| addons     | disable gvisor -p              | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:19 UTC | 17 Jul 23 20:19 UTC |
	|            | addons-911602                  |                        |         |         |                     |                     |
	| delete     | -p addons-911602               | addons-911602          | jenkins | v1.30.1 | 17 Jul 23 20:19 UTC | 17 Jul 23 20:19 UTC |
	| start      | -p dockerenv-035354            | dockerenv-035354       | jenkins | v1.30.1 | 17 Jul 23 20:19 UTC | 17 Jul 23 20:19 UTC |
	|            | --driver=docker                |                        |         |         |                     |                     |
	|            | --container-runtime=containerd |                        |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p        | dockerenv-035354       | jenkins | v1.30.1 | 17 Jul 23 20:19 UTC | 17 Jul 23 20:19 UTC |
	|            | dockerenv-035354               |                        |         |         |                     |                     |
	|------------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:19:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:19:09.479461  917496 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:19:09.485662  917496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:19:09.485669  917496 out.go:309] Setting ErrFile to fd 2...
	I0717 20:19:09.485675  917496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:19:09.486082  917496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:19:09.486594  917496 out.go:303] Setting JSON to false
	I0717 20:19:09.488085  917496 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14497,"bootTime":1689610653,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:19:09.488153  917496 start.go:138] virtualization:  
	I0717 20:19:09.490691  917496 out.go:177] * [dockerenv-035354] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:19:09.493092  917496 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:19:09.495100  917496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:19:09.493258  917496 notify.go:220] Checking for updates...
	I0717 20:19:09.499545  917496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:19:09.501383  917496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:19:09.503480  917496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:19:09.505543  917496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:19:09.508104  917496 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:19:09.532574  917496 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:19:09.532692  917496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:19:09.614530  917496 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 20:19:09.603818203 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:19:09.614630  917496 docker.go:294] overlay module found
	I0717 20:19:09.616911  917496 out.go:177] * Using the docker driver based on user configuration
	I0717 20:19:09.619007  917496 start.go:298] selected driver: docker
	I0717 20:19:09.619016  917496 start.go:880] validating driver "docker" against <nil>
	I0717 20:19:09.619027  917496 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:19:09.619147  917496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:19:09.689227  917496 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 20:19:09.679776669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:19:09.689390  917496 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 20:19:09.689666  917496 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 20:19:09.689819  917496 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 20:19:09.692364  917496 out.go:177] * Using Docker driver with root privileges
	I0717 20:19:09.694591  917496 cni.go:84] Creating CNI manager for ""
	I0717 20:19:09.694607  917496 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:19:09.694619  917496 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 20:19:09.694630  917496 start_flags.go:319] config:
	{Name:dockerenv-035354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-035354 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:19:09.697252  917496 out.go:177] * Starting control plane node dockerenv-035354 in cluster dockerenv-035354
	I0717 20:19:09.699384  917496 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 20:19:09.701645  917496 out.go:177] * Pulling base image ...
	I0717 20:19:09.703385  917496 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:19:09.703436  917496 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4
	I0717 20:19:09.703445  917496 cache.go:57] Caching tarball of preloaded images
	I0717 20:19:09.703469  917496 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 20:19:09.703545  917496 preload.go:174] Found /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 20:19:09.703560  917496 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0717 20:19:09.704056  917496 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/config.json ...
	I0717 20:19:09.704081  917496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/config.json: {Name:mked3c2c5bf6850a1258b6678184bb736e443055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:09.720550  917496 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 20:19:09.720564  917496 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 20:19:09.720582  917496 cache.go:195] Successfully downloaded all kic artifacts
	I0717 20:19:09.720639  917496 start.go:365] acquiring machines lock for dockerenv-035354: {Name:mk8143d529dafb8829ca70e545a474d66842e8a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:19:09.720751  917496 start.go:369] acquired machines lock for "dockerenv-035354" in 95.442µs
	I0717 20:19:09.720775  917496 start.go:93] Provisioning new machine with config: &{Name:dockerenv-035354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-035354 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 20:19:09.720897  917496 start.go:125] createHost starting for "" (driver="docker")
	I0717 20:19:09.723201  917496 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 20:19:09.723443  917496 start.go:159] libmachine.API.Create for "dockerenv-035354" (driver="docker")
	I0717 20:19:09.723490  917496 client.go:168] LocalClient.Create starting
	I0717 20:19:09.723561  917496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem
	I0717 20:19:09.723595  917496 main.go:141] libmachine: Decoding PEM data...
	I0717 20:19:09.723608  917496 main.go:141] libmachine: Parsing certificate...
	I0717 20:19:09.723658  917496 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem
	I0717 20:19:09.723673  917496 main.go:141] libmachine: Decoding PEM data...
	I0717 20:19:09.723687  917496 main.go:141] libmachine: Parsing certificate...
	I0717 20:19:09.724059  917496 cli_runner.go:164] Run: docker network inspect dockerenv-035354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 20:19:09.741157  917496 cli_runner.go:211] docker network inspect dockerenv-035354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 20:19:09.741229  917496 network_create.go:281] running [docker network inspect dockerenv-035354] to gather additional debugging logs...
	I0717 20:19:09.741246  917496 cli_runner.go:164] Run: docker network inspect dockerenv-035354
	W0717 20:19:09.757859  917496 cli_runner.go:211] docker network inspect dockerenv-035354 returned with exit code 1
	I0717 20:19:09.757880  917496 network_create.go:284] error running [docker network inspect dockerenv-035354]: docker network inspect dockerenv-035354: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-035354 not found
	I0717 20:19:09.757891  917496 network_create.go:286] output of [docker network inspect dockerenv-035354]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-035354 not found
	
	** /stderr **
	I0717 20:19:09.757962  917496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:19:09.778704  917496 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001155f90}
	I0717 20:19:09.778737  917496 network_create.go:123] attempt to create docker network dockerenv-035354 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 20:19:09.778797  917496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-035354 dockerenv-035354
	I0717 20:19:09.851279  917496 network_create.go:107] docker network dockerenv-035354 192.168.49.0/24 created
	I0717 20:19:09.851299  917496 kic.go:117] calculated static IP "192.168.49.2" for the "dockerenv-035354" container
	I0717 20:19:09.851376  917496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 20:19:09.867783  917496 cli_runner.go:164] Run: docker volume create dockerenv-035354 --label name.minikube.sigs.k8s.io=dockerenv-035354 --label created_by.minikube.sigs.k8s.io=true
	I0717 20:19:09.886457  917496 oci.go:103] Successfully created a docker volume dockerenv-035354
	I0717 20:19:09.886543  917496 cli_runner.go:164] Run: docker run --rm --name dockerenv-035354-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-035354 --entrypoint /usr/bin/test -v dockerenv-035354:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 20:19:10.555078  917496 oci.go:107] Successfully prepared a docker volume dockerenv-035354
	I0717 20:19:10.555120  917496 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:19:10.555139  917496 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 20:19:10.555233  917496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-035354:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 20:19:14.817410  917496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-035354:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.262141112s)
	I0717 20:19:14.817444  917496 kic.go:199] duration metric: took 4.262301 seconds to extract preloaded images to volume
	W0717 20:19:14.817780  917496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 20:19:14.817888  917496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 20:19:14.887297  917496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-035354 --name dockerenv-035354 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-035354 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-035354 --network dockerenv-035354 --ip 192.168.49.2 --volume dockerenv-035354:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 20:19:15.309619  917496 cli_runner.go:164] Run: docker container inspect dockerenv-035354 --format={{.State.Running}}
	I0717 20:19:15.342722  917496 cli_runner.go:164] Run: docker container inspect dockerenv-035354 --format={{.State.Status}}
	I0717 20:19:15.363849  917496 cli_runner.go:164] Run: docker exec dockerenv-035354 stat /var/lib/dpkg/alternatives/iptables
	I0717 20:19:15.429400  917496 oci.go:144] the created container "dockerenv-035354" has a running status.
	I0717 20:19:15.429418  917496 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa...
	I0717 20:19:15.940229  917496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 20:19:15.992703  917496 cli_runner.go:164] Run: docker container inspect dockerenv-035354 --format={{.State.Status}}
	I0717 20:19:16.022576  917496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 20:19:16.022587  917496 kic_runner.go:114] Args: [docker exec --privileged dockerenv-035354 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 20:19:16.133950  917496 cli_runner.go:164] Run: docker container inspect dockerenv-035354 --format={{.State.Status}}
	I0717 20:19:16.162345  917496 machine.go:88] provisioning docker machine ...
	I0717 20:19:16.162369  917496 ubuntu.go:169] provisioning hostname "dockerenv-035354"
	I0717 20:19:16.162433  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:16.187736  917496 main.go:141] libmachine: Using SSH client type: native
	I0717 20:19:16.188202  917496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I0717 20:19:16.188213  917496 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-035354 && echo "dockerenv-035354" | sudo tee /etc/hostname
	I0717 20:19:16.387571  917496 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-035354
	
	I0717 20:19:16.387654  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:16.409729  917496 main.go:141] libmachine: Using SSH client type: native
	I0717 20:19:16.410239  917496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33720 <nil> <nil>}
	I0717 20:19:16.410255  917496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-035354' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-035354/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-035354' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:19:16.550288  917496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
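The SSH command above keeps the `127.0.1.1` entry in `/etc/hosts` in sync with the new hostname. The same grep/sed logic can be exercised against a scratch file (the file content and old hostname below are illustrative; GNU grep/sed assumed for `\s` and `sed -i`):

```shell
#!/bin/sh
# Replicate the /etc/hosts hostname patch on a temporary file.
HOSTS=$(mktemp)
NAME="dockerenv-035354"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
if ! grep -q "\s${NAME}$" "$HOSTS"; then
    if grep -q '^127.0.1.1\s' "$HOSTS"; then
        # an entry already exists: rewrite it in place
        sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/" "$HOSTS"
    else
        # no entry yet: append one
        echo "127.0.1.1 ${NAME}" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```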
	I0717 20:19:16.550303  917496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-898608/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-898608/.minikube}
	I0717 20:19:16.550329  917496 ubuntu.go:177] setting up certificates
	I0717 20:19:16.550338  917496 provision.go:83] configureAuth start
	I0717 20:19:16.550394  917496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-035354
	I0717 20:19:16.577879  917496 provision.go:138] copyHostCerts
	I0717 20:19:16.577937  917496 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem, removing ...
	I0717 20:19:16.577944  917496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem
	I0717 20:19:16.578023  917496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem (1078 bytes)
	I0717 20:19:16.578108  917496 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem, removing ...
	I0717 20:19:16.578112  917496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem
	I0717 20:19:16.578137  917496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem (1123 bytes)
	I0717 20:19:16.578187  917496 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem, removing ...
	I0717 20:19:16.578190  917496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem
	I0717 20:19:16.578211  917496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem (1675 bytes)
	I0717 20:19:16.578252  917496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem org=jenkins.dockerenv-035354 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube dockerenv-035354]
	I0717 20:19:17.502190  917496 provision.go:172] copyRemoteCerts
	I0717 20:19:17.502272  917496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:19:17.502319  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:17.520706  917496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa Username:docker}
	I0717 20:19:17.619728  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 20:19:17.648683  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 20:19:17.677406  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 20:19:17.706289  917496 provision.go:86] duration metric: configureAuth took 1.155939076s
	I0717 20:19:17.706304  917496 ubuntu.go:193] setting minikube options for container-runtime
	I0717 20:19:17.706485  917496 config.go:182] Loaded profile config "dockerenv-035354": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:19:17.706490  917496 machine.go:91] provisioned docker machine in 1.544135973s
	I0717 20:19:17.706494  917496 client.go:171] LocalClient.Create took 7.98300066s
	I0717 20:19:17.706515  917496 start.go:167] duration metric: libmachine.API.Create for "dockerenv-035354" took 7.983072241s
	I0717 20:19:17.706521  917496 start.go:300] post-start starting for "dockerenv-035354" (driver="docker")
	I0717 20:19:17.706528  917496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:19:17.706585  917496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:19:17.706626  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:17.724429  917496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa Username:docker}
	I0717 20:19:17.820567  917496 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:19:17.825044  917496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 20:19:17.825071  917496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 20:19:17.825081  917496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 20:19:17.825086  917496 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 20:19:17.825095  917496 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/addons for local assets ...
	I0717 20:19:17.825158  917496 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/files for local assets ...
	I0717 20:19:17.825179  917496 start.go:303] post-start completed in 118.653224ms
	I0717 20:19:17.825547  917496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-035354
	I0717 20:19:17.843498  917496 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/config.json ...
	I0717 20:19:17.843777  917496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:19:17.843823  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:17.862065  917496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa Username:docker}
	I0717 20:19:17.950904  917496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 20:19:17.956807  917496 start.go:128] duration metric: createHost completed in 8.23589578s
	I0717 20:19:17.956829  917496 start.go:83] releasing machines lock for "dockerenv-035354", held for 8.236071813s
	I0717 20:19:17.956920  917496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-035354
	I0717 20:19:17.974563  917496 ssh_runner.go:195] Run: cat /version.json
	I0717 20:19:17.974609  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:17.974632  917496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:19:17.974687  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:18.004082  917496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa Username:docker}
	I0717 20:19:18.004782  917496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa Username:docker}
	W0717 20:19:18.098075  917496 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 20:19:18.098165  917496 ssh_runner.go:195] Run: systemctl --version
	I0717 20:19:18.242062  917496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 20:19:18.248020  917496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 20:19:18.279037  917496 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 20:19:18.279108  917496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:19:18.314602  917496 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
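The two `find` passes above first inject a `"name"` key into the loopback CNI config and pin its `cniVersion`, then sideline any bridge/podman configs by renaming them to `*.mk_disabled`. The patch half can be tried on a scratch copy (the sample config content is illustrative; GNU sed assumed, and the escaped spaces in the `i \` command preserve the JSON indentation, as in the logged command):

```shell
#!/bin/sh
# Patch a minimal loopback CNI config the way the log above does:
# add a "name" field if missing and force cniVersion to 1.0.0.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
if grep -q loopback "$CONF" && ! grep -q '"name"' "$CONF"; then
    sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$CONF"
fi
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|' "$CONF"
cat "$CONF"
```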
	I0717 20:19:18.314617  917496 start.go:469] detecting cgroup driver to use...
	I0717 20:19:18.314648  917496 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 20:19:18.314699  917496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 20:19:18.329542  917496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 20:19:18.343735  917496 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:19:18.343812  917496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:19:18.360565  917496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:19:18.377214  917496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 20:19:18.488191  917496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:19:18.595768  917496 docker.go:212] disabling docker service ...
	I0717 20:19:18.595825  917496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:19:18.620417  917496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:19:18.635159  917496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:19:18.727302  917496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:19:18.829475  917496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:19:18.843699  917496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:19:18.864191  917496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 20:19:18.877456  917496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 20:19:18.890951  917496 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 20:19:18.891030  917496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 20:19:18.904031  917496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:19:18.917231  917496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 20:19:18.930109  917496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:19:18.943170  917496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:19:18.955762  917496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 20:19:18.968686  917496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:19:18.979419  917496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:19:18.990025  917496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:19:19.081790  917496 ssh_runner.go:195] Run: sudo systemctl restart containerd
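The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place: pin the pause image to 3.9, allow OOM-score adjustment, switch `SystemdCgroup` off for the cgroupfs driver, and normalize the runc runtime to v2 before restarting containerd. The same transformations can be applied to a scratch snippet (the sample TOML lines are illustrative; GNU sed assumed):

```shell
#!/bin/sh
# Apply the containerd config rewrites from the log to a sample snippet.
TOML=$(mktemp)
cat > "$TOML" <<'EOF'
sandbox_image = "registry.k8s.io/pause:3.8"
restrict_oom_score_adj = true
SystemdCgroup = true
runtime_type = "io.containerd.runtime.v1.linux"
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$TOML"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$TOML"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$TOML"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$TOML"
cat "$TOML"
```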
	I0717 20:19:19.169992  917496 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 20:19:19.170052  917496 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 20:19:19.174976  917496 start.go:537] Will wait 60s for crictl version
	I0717 20:19:19.175030  917496 ssh_runner.go:195] Run: which crictl
	I0717 20:19:19.180153  917496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:19:19.237040  917496 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0717 20:19:19.237099  917496 ssh_runner.go:195] Run: containerd --version
	I0717 20:19:19.266979  917496 ssh_runner.go:195] Run: containerd --version
	I0717 20:19:19.303261  917496 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0717 20:19:19.304910  917496 cli_runner.go:164] Run: docker network inspect dockerenv-035354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:19:19.322139  917496 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 20:19:19.326831  917496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:19:19.340511  917496 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:19:19.340564  917496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:19:19.384994  917496 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 20:19:19.385006  917496 containerd.go:518] Images already preloaded, skipping extraction
	I0717 20:19:19.385061  917496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:19:19.429086  917496 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 20:19:19.429098  917496 cache_images.go:84] Images are preloaded, skipping loading
	I0717 20:19:19.429167  917496 ssh_runner.go:195] Run: sudo crictl info
	I0717 20:19:19.473839  917496 cni.go:84] Creating CNI manager for ""
	I0717 20:19:19.473850  917496 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:19:19.473860  917496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 20:19:19.473877  917496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-035354 NodeName:dockerenv-035354 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 20:19:19.474043  917496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-035354"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 20:19:19.474108  917496 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=dockerenv-035354 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:dockerenv-035354 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 20:19:19.474172  917496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 20:19:19.485604  917496 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 20:19:19.485680  917496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 20:19:19.496639  917496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0717 20:19:19.518838  917496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 20:19:19.541147  917496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0717 20:19:19.563319  917496 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 20:19:19.567993  917496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:19:19.581991  917496 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354 for IP: 192.168.49.2
	I0717 20:19:19.582013  917496 certs.go:190] acquiring lock for shared ca certs: {Name:mk081da4b0c80820af8357079096999320bef2ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:19.582152  917496 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key
	I0717 20:19:19.582190  917496 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key
	I0717 20:19:19.582238  917496 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/client.key
	I0717 20:19:19.582267  917496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/client.crt with IP's: []
	I0717 20:19:19.845630  917496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/client.crt ...
	I0717 20:19:19.845646  917496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/client.crt: {Name:mk5bdb64cf452ebd0cf2513d7a98f91e321a7e12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:19.845856  917496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/client.key ...
	I0717 20:19:19.845863  917496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/client.key: {Name:mk23d9a48b88ef895d1385d12435cf58c84f1a17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:19.846512  917496 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.key.dd3b5fb2
	I0717 20:19:19.846525  917496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 20:19:20.000724  917496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.crt.dd3b5fb2 ...
	I0717 20:19:20.000742  917496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.crt.dd3b5fb2: {Name:mk49c3d627078a49c37dbe716544a12d5f571d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:20.002310  917496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.key.dd3b5fb2 ...
	I0717 20:19:20.002336  917496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.key.dd3b5fb2: {Name:mkce65583e2860e233551632ab6e0dae5e141c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:20.002442  917496 certs.go:337] copying /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.crt
	I0717 20:19:20.002513  917496 certs.go:341] copying /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.key
	I0717 20:19:20.002563  917496 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/proxy-client.key
	I0717 20:19:20.002574  917496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/proxy-client.crt with IP's: []
	I0717 20:19:20.157801  917496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/proxy-client.crt ...
	I0717 20:19:20.157816  917496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/proxy-client.crt: {Name:mk3019774071ff5c23f9ea3669cb88119f4d7ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:20.158020  917496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/proxy-client.key ...
	I0717 20:19:20.158029  917496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/proxy-client.key: {Name:mk63bce63043699553d7af7281caafddad2c75eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:20.158697  917496 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 20:19:20.158741  917496 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem (1078 bytes)
	I0717 20:19:20.158766  917496 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem (1123 bytes)
	I0717 20:19:20.158788  917496 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem (1675 bytes)
	I0717 20:19:20.159392  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 20:19:20.195216  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 20:19:20.226831  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 20:19:20.257263  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/dockerenv-035354/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 20:19:20.287992  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 20:19:20.318569  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 20:19:20.348176  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 20:19:20.377895  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 20:19:20.408278  917496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 20:19:20.438553  917496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 20:19:20.460424  917496 ssh_runner.go:195] Run: openssl version
	I0717 20:19:20.467878  917496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 20:19:20.480256  917496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:19:20.485231  917496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 20:15 /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:19:20.485293  917496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:19:20.494334  917496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 20:19:20.506312  917496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 20:19:20.510659  917496 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 20:19:20.510698  917496 kubeadm.go:404] StartCluster: {Name:dockerenv-035354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:dockerenv-035354 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:19:20.510767  917496 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 20:19:20.510820  917496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 20:19:20.553686  917496 cri.go:89] found id: ""
	I0717 20:19:20.553750  917496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 20:19:20.564585  917496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:19:20.575413  917496 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 20:19:20.575466  917496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:19:20.586608  917496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:19:20.586641  917496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 20:19:20.640197  917496 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 20:19:20.640617  917496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:19:20.689994  917496 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 20:19:20.690052  917496 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 20:19:20.690084  917496 kubeadm.go:322] OS: Linux
	I0717 20:19:20.690126  917496 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 20:19:20.690171  917496 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 20:19:20.690215  917496 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 20:19:20.690262  917496 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 20:19:20.690307  917496 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 20:19:20.690353  917496 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 20:19:20.690394  917496 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 20:19:20.690438  917496 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 20:19:20.690481  917496 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 20:19:20.771017  917496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:19:20.771117  917496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:19:20.771204  917496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:19:21.028955  917496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:19:21.031959  917496 out.go:204]   - Generating certificates and keys ...
	I0717 20:19:21.032122  917496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:19:21.032187  917496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:19:21.472253  917496 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 20:19:22.137449  917496 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 20:19:22.981953  917496 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 20:19:23.252498  917496 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 20:19:23.707004  917496 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 20:19:23.707402  917496 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [dockerenv-035354 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 20:19:24.454203  917496 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 20:19:24.454437  917496 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-035354 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 20:19:25.133042  917496 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 20:19:25.485965  917496 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 20:19:25.763913  917496 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 20:19:25.764186  917496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:19:26.213125  917496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:19:26.876894  917496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:19:27.134645  917496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:19:27.588396  917496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:19:27.606066  917496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:19:27.608096  917496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:19:27.608596  917496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:19:27.716372  917496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:19:27.718999  917496 out.go:204]   - Booting up control plane ...
	I0717 20:19:27.719110  917496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:19:27.720150  917496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:19:27.721760  917496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:19:27.723103  917496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:19:27.728017  917496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:19:35.734939  917496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.006382 seconds
	I0717 20:19:35.735070  917496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:19:35.750465  917496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:19:36.276472  917496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:19:36.277003  917496 kubeadm.go:322] [mark-control-plane] Marking the node dockerenv-035354 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 20:19:36.797153  917496 kubeadm.go:322] [bootstrap-token] Using token: 9mzug1.v2vgutn8424d7gxc
	I0717 20:19:36.800399  917496 out.go:204]   - Configuring RBAC rules ...
	I0717 20:19:36.800518  917496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:19:36.806704  917496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:19:36.817076  917496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:19:36.822865  917496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:19:36.827821  917496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:19:36.834380  917496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:19:36.850149  917496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:19:37.124095  917496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:19:37.212236  917496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:19:37.213819  917496 kubeadm.go:322] 
	I0717 20:19:37.213881  917496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:19:37.213886  917496 kubeadm.go:322] 
	I0717 20:19:37.213957  917496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:19:37.213961  917496 kubeadm.go:322] 
	I0717 20:19:37.213985  917496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:19:37.214433  917496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:19:37.214484  917496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:19:37.214488  917496 kubeadm.go:322] 
	I0717 20:19:37.214538  917496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 20:19:37.214542  917496 kubeadm.go:322] 
	I0717 20:19:37.214588  917496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 20:19:37.214591  917496 kubeadm.go:322] 
	I0717 20:19:37.214639  917496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:19:37.214709  917496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:19:37.214773  917496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:19:37.214776  917496 kubeadm.go:322] 
	I0717 20:19:37.215048  917496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:19:37.215123  917496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:19:37.215127  917496 kubeadm.go:322] 
	I0717 20:19:37.215420  917496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9mzug1.v2vgutn8424d7gxc \
	I0717 20:19:37.215531  917496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c9abf7e3fe1433b2ee101b0228035f8f2ed73026247b5c044e3b341ae3a2cc41 \
	I0717 20:19:37.215750  917496 kubeadm.go:322] 	--control-plane 
	I0717 20:19:37.215757  917496 kubeadm.go:322] 
	I0717 20:19:37.216008  917496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:19:37.216014  917496 kubeadm.go:322] 
	I0717 20:19:37.216270  917496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9mzug1.v2vgutn8424d7gxc \
	I0717 20:19:37.216529  917496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c9abf7e3fe1433b2ee101b0228035f8f2ed73026247b5c044e3b341ae3a2cc41 
	I0717 20:19:37.228355  917496 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 20:19:37.228461  917496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:19:37.228475  917496 cni.go:84] Creating CNI manager for ""
	I0717 20:19:37.228485  917496 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:19:37.231056  917496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 20:19:37.233022  917496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 20:19:37.249075  917496 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 20:19:37.249087  917496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 20:19:37.283897  917496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 20:19:38.296771  917496 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.012836144s)
	I0717 20:19:38.296804  917496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:19:38.296905  917496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:38.296968  917496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=dockerenv-035354 minikube.k8s.io/updated_at=2023_07_17T20_19_38_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:19:38.465185  917496 kubeadm.go:1081] duration metric: took 168.358494ms to wait for elevateKubeSystemPrivileges.
	I0717 20:19:38.465211  917496 ops.go:34] apiserver oom_adj: -16
	I0717 20:19:38.465384  917496 kubeadm.go:406] StartCluster complete in 17.954688135s
	I0717 20:19:38.465402  917496 settings.go:142] acquiring lock: {Name:mk07e0d8498fadd24504785e1ba3db0cfccaf251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:38.465468  917496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:19:38.466170  917496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/kubeconfig: {Name:mk933d9b210c77bbf248211a6ac799f4302f2fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:19:38.468315  917496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:19:38.468606  917496 config.go:182] Loaded profile config "dockerenv-035354": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:19:38.468639  917496 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:19:38.468700  917496 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-035354"
	I0717 20:19:38.468713  917496 addons.go:231] Setting addon storage-provisioner=true in "dockerenv-035354"
	I0717 20:19:38.468766  917496 host.go:66] Checking if "dockerenv-035354" exists ...
	I0717 20:19:38.469316  917496 cli_runner.go:164] Run: docker container inspect dockerenv-035354 --format={{.State.Status}}
	I0717 20:19:38.469390  917496 addons.go:69] Setting default-storageclass=true in profile "dockerenv-035354"
	I0717 20:19:38.469406  917496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-035354"
	I0717 20:19:38.469681  917496 cli_runner.go:164] Run: docker container inspect dockerenv-035354 --format={{.State.Status}}
	I0717 20:19:38.535808  917496 addons.go:231] Setting addon default-storageclass=true in "dockerenv-035354"
	I0717 20:19:38.535840  917496 host.go:66] Checking if "dockerenv-035354" exists ...
	I0717 20:19:38.536287  917496 cli_runner.go:164] Run: docker container inspect dockerenv-035354 --format={{.State.Status}}
	I0717 20:19:38.546208  917496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:19:38.548042  917496 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:19:38.548054  917496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:19:38.548116  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:38.570422  917496 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:19:38.570434  917496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:19:38.570516  917496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-035354
	I0717 20:19:38.602277  917496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa Username:docker}
	I0717 20:19:38.626532  917496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33720 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/dockerenv-035354/id_rsa Username:docker}
	I0717 20:19:38.632240  917496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:19:38.767937  917496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:19:38.804316  917496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:19:39.002060  917496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "dockerenv-035354" context rescaled to 1 replicas
	I0717 20:19:39.002096  917496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 20:19:39.004692  917496 out.go:177] * Verifying Kubernetes components...
	I0717 20:19:39.007021  917496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:19:39.189789  917496 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 20:19:39.457168  917496 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 20:19:39.455488  917496 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:19:39.457308  917496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:19:39.459195  917496 addons.go:502] enable addons completed in 990.543972ms: enabled=[storage-provisioner default-storageclass]
	I0717 20:19:39.477836  917496 api_server.go:72] duration metric: took 475.700931ms to wait for apiserver process to appear ...
	I0717 20:19:39.477849  917496 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:19:39.477865  917496 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 20:19:39.488301  917496 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 20:19:39.489666  917496 api_server.go:141] control plane version: v1.27.3
	I0717 20:19:39.489681  917496 api_server.go:131] duration metric: took 11.826131ms to wait for apiserver health ...
	I0717 20:19:39.489687  917496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:19:39.496249  917496 system_pods.go:59] 5 kube-system pods found
	I0717 20:19:39.496268  917496 system_pods.go:61] "etcd-dockerenv-035354" [ca4b9bc4-9e68-40b4-ae01-68cc5efb485f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 20:19:39.496276  917496 system_pods.go:61] "kube-apiserver-dockerenv-035354" [bebeb796-50c9-472f-9b90-6939ae375ed8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 20:19:39.496284  917496 system_pods.go:61] "kube-controller-manager-dockerenv-035354" [4cb66903-4a08-47c7-829c-49b4ad055802] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 20:19:39.496291  917496 system_pods.go:61] "kube-scheduler-dockerenv-035354" [d81396fd-4e70-477b-aff6-7f5fd42f8308] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 20:19:39.496297  917496 system_pods.go:61] "storage-provisioner" [046dd970-4957-4120-8f26-864b522215d8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0717 20:19:39.496303  917496 system_pods.go:74] duration metric: took 6.610933ms to wait for pod list to return data ...
	I0717 20:19:39.496312  917496 kubeadm.go:581] duration metric: took 494.184141ms to wait for : map[apiserver:true system_pods:true] ...
	I0717 20:19:39.496323  917496 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:19:39.499763  917496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 20:19:39.499780  917496 node_conditions.go:123] node cpu capacity is 2
	I0717 20:19:39.499791  917496 node_conditions.go:105] duration metric: took 3.463325ms to run NodePressure ...
	I0717 20:19:39.499801  917496 start.go:228] waiting for startup goroutines ...
	I0717 20:19:39.499806  917496 start.go:233] waiting for cluster config update ...
	I0717 20:19:39.499815  917496 start.go:242] writing updated cluster config ...
	I0717 20:19:39.500094  917496 ssh_runner.go:195] Run: rm -f paused
	I0717 20:19:39.572987  917496 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 20:19:39.574893  917496 out.go:177] * Done! kubectl is now configured to use "dockerenv-035354" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	43cd940ec3b0a       fb73e92641fd5       1 second ago        Running             kube-proxy                0                   8b76b79bd3594       kube-proxy-ht6cm
	62d3dd28ed184       b18bf71b941ba       2 seconds ago       Running             kindnet-cni               0                   a7b9b841a5fc8       kindnet-qmwqm
	d5deb6102baaf       ba04bb24b9575       2 seconds ago       Running             storage-provisioner       0                   13e246388c961       storage-provisioner
	6401a6a80f277       39dfb036b0986       22 seconds ago      Running             kube-apiserver            0                   1ccc9624bd3ee       kube-apiserver-dockerenv-035354
	3d2395acf9a2a       ab3683b584ae5       22 seconds ago      Running             kube-controller-manager   0                   715216ca2998d       kube-controller-manager-dockerenv-035354
	f47623ccd0201       bcb9e554eaab6       22 seconds ago      Running             kube-scheduler            0                   9060ee00cb266       kube-scheduler-dockerenv-035354
	ab5ec28a6f154       24bc64e911039       22 seconds ago      Running             etcd                      0                   90096ae8c1ecb       etcd-dockerenv-035354
	
	* 
	* ==> containerd <==
	* Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.079084084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.079192015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.079509890Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13e246388c961deb1ddfaf45964bbcfbd8b450d0329a5a9a6678bb01ef60c352 pid=1472 runtime=io.containerd.runc.v2
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.221218726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:046dd970-4957-4120-8f26-864b522215d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"13e246388c961deb1ddfaf45964bbcfbd8b450d0329a5a9a6678bb01ef60c352\""
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.232506091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-qmwqm,Uid:24b2d311-e767-4574-b9b0-933fb43cd150,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7b9b841a5fc8a1fcf5cea4b4ef449e2d49f2e093a8c26c88cd7843202942ee3\""
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.240150479Z" level=info msg="CreateContainer within sandbox \"13e246388c961deb1ddfaf45964bbcfbd8b450d0329a5a9a6678bb01ef60c352\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.248995370Z" level=info msg="CreateContainer within sandbox \"a7b9b841a5fc8a1fcf5cea4b4ef449e2d49f2e093a8c26c88cd7843202942ee3\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.260544923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-45hgd,Uid:c7ffad58-c69a-480f-8dbf-3f9155976259,Namespace:kube-system,Attempt:0,}"
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.281107810Z" level=info msg="CreateContainer within sandbox \"13e246388c961deb1ddfaf45964bbcfbd8b450d0329a5a9a6678bb01ef60c352\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d5deb6102baaf682e4d1131f2d8b3921a8562d54d84565c850de085707c0795e\""
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.288917540Z" level=info msg="StartContainer for \"d5deb6102baaf682e4d1131f2d8b3921a8562d54d84565c850de085707c0795e\""
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.293131669Z" level=info msg="CreateContainer within sandbox \"a7b9b841a5fc8a1fcf5cea4b4ef449e2d49f2e093a8c26c88cd7843202942ee3\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"62d3dd28ed184213c549182dfd39b059960b4470f86954e4cd05ddd6315c65e1\""
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.299343693Z" level=info msg="StartContainer for \"62d3dd28ed184213c549182dfd39b059960b4470f86954e4cd05ddd6315c65e1\""
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.396359782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-45hgd,Uid:c7ffad58-c69a-480f-8dbf-3f9155976259,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\": failed to find network info for sandbox \"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\""
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.449179880Z" level=info msg="StartContainer for \"d5deb6102baaf682e4d1131f2d8b3921a8562d54d84565c850de085707c0795e\" returns successfully"
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.469622972Z" level=info msg="StartContainer for \"62d3dd28ed184213c549182dfd39b059960b4470f86954e4cd05ddd6315c65e1\" returns successfully"
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.946502218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ht6cm,Uid:5702cc40-b0ad-4c44-aff6-5188672e9a99,Namespace:kube-system,Attempt:0,}"
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.978098735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.978705612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.979340862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 17 20:19:50 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:50.979755649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b76b79bd359480563efeaf1e2485a54f215138110e955e35ceaec7b9c8fdd06 pid=1717 runtime=io.containerd.runc.v2
	Jul 17 20:19:51 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:51.055200775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ht6cm,Uid:5702cc40-b0ad-4c44-aff6-5188672e9a99,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b76b79bd359480563efeaf1e2485a54f215138110e955e35ceaec7b9c8fdd06\""
	Jul 17 20:19:51 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:51.063550321Z" level=info msg="CreateContainer within sandbox \"8b76b79bd359480563efeaf1e2485a54f215138110e955e35ceaec7b9c8fdd06\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jul 17 20:19:51 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:51.088378292Z" level=info msg="CreateContainer within sandbox \"8b76b79bd359480563efeaf1e2485a54f215138110e955e35ceaec7b9c8fdd06\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43cd940ec3b0aa0f1a276c70f61fd36a96303e22fc343534e323378501516471\""
	Jul 17 20:19:51 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:51.093942380Z" level=info msg="StartContainer for \"43cd940ec3b0aa0f1a276c70f61fd36a96303e22fc343534e323378501516471\""
	Jul 17 20:19:51 dockerenv-035354 containerd[741]: time="2023-07-17T20:19:51.197304541Z" level=info msg="StartContainer for \"43cd940ec3b0aa0f1a276c70f61fd36a96303e22fc343534e323378501516471\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               dockerenv-035354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=dockerenv-035354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=dockerenv-035354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T20_19_38_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 20:19:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-035354
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:19:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:19:37 +0000   Mon, 17 Jul 2023 20:19:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:19:37 +0000   Mon, 17 Jul 2023 20:19:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:19:37 +0000   Mon, 17 Jul 2023 20:19:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:19:37 +0000   Mon, 17 Jul 2023 20:19:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-035354
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 7879618d3ce140f0a252808b427b0833
	  System UUID:                40f7d6df-de12-4e94-9d05-d6a066d75955
	  Boot ID:                    cbdc664b-32f3-4468-95d3-fdbd4fe2a3f0
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-45hgd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3s
	  kube-system                 etcd-dockerenv-035354                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15s
	  kube-system                 kindnet-qmwqm                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-dockerenv-035354             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-035354    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kube-proxy-ht6cm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-dockerenv-035354             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (2%)    220Mi (2%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	  hugepages-32Mi     0 (0%)        0 (0%)
	  hugepages-64Ki     0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node dockerenv-035354 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node dockerenv-035354 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node dockerenv-035354 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15s                kubelet          Node dockerenv-035354 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s                kubelet          Node dockerenv-035354 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s                kubelet          Node dockerenv-035354 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15s                kubelet          Node dockerenv-035354 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15s                kubelet          Node dockerenv-035354 status is now: NodeReady
	  Normal  RegisteredNode           4s                 node-controller  Node dockerenv-035354 event: Registered Node dockerenv-035354 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001063] FS-Cache: O-key=[8] '7f70ed0000000000'
	[  +0.000727] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=000000000634e0be
	[  +0.001055] FS-Cache: N-key=[8] '7f70ed0000000000'
	[  +0.003218] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=00000029 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001023] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=000000009d0a711b
	[  +0.001085] FS-Cache: O-key=[8] '7f70ed0000000000'
	[  +0.000701] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000f945483a
	[  +0.001066] FS-Cache: N-key=[8] '7f70ed0000000000'
	[  +2.901901] FS-Cache: Duplicate cookie detected
	[  +0.000746] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=0000000032dcb531
	[  +0.001173] FS-Cache: O-key=[8] '7e70ed0000000000'
	[  +0.000773] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001012] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=000000000634e0be
	[  +0.001123] FS-Cache: N-key=[8] '7e70ed0000000000'
	[  +0.313793] FS-Cache: Duplicate cookie detected
	[  +0.000747] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000963] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000a176162d
	[  +0.001112] FS-Cache: O-key=[8] '8470ed0000000000'
	[  +0.000713] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=0000000099715709
	[  +0.001033] FS-Cache: N-key=[8] '8470ed0000000000'
	
	* 
	* ==> etcd [ab5ec28a6f1543f9005645cc4ee5ae80c1cccba225ee107aa4af5e9123244f06] <==
	* {"level":"info","ts":"2023-07-17T20:19:29.950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-07-17T20:19:29.951Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-07-17T20:19:29.954Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T20:19:29.954Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-17T20:19:29.957Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-07-17T20:19:29.958Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T20:19:29.958Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T20:19:30.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T20:19:30.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T20:19:30.608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-07-17T20:19:30.609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T20:19:30.609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T20:19:30.609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-07-17T20:19:30.609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-07-17T20:19:30.613Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:dockerenv-035354 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T20:19:30.613Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:19:30.614Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T20:19:30.614Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:19:30.616Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:19:30.617Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:19:30.617Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:19:30.617Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:19:30.617Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T20:19:30.617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T20:19:30.618Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:19:52 up  4:02,  0 users,  load average: 2.39, 2.04, 2.20
	Linux dockerenv-035354 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [62d3dd28ed184213c549182dfd39b059960b4470f86954e4cd05ddd6315c65e1] <==
	* I0717 20:19:50.541838       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 20:19:50.541951       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0717 20:19:50.542076       1 main.go:116] setting mtu 1500 for CNI 
	I0717 20:19:50.542091       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 20:19:50.542104       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kube-apiserver [6401a6a80f2779fb22e35641cd3f8320b4faf593ded6d952910bce20bd240197] <==
	* I0717 20:19:34.069109       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 20:19:34.069252       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 20:19:34.069333       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 20:19:34.069906       1 aggregator.go:152] initial CRD sync complete...
	I0717 20:19:34.070045       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 20:19:34.070130       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 20:19:34.070258       1 cache.go:39] Caches are synced for autoregister controller
	I0717 20:19:34.074354       1 controller.go:624] quota admission added evaluator for: namespaces
	I0717 20:19:34.152162       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 20:19:34.599575       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 20:19:34.905334       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 20:19:34.910754       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 20:19:34.910780       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 20:19:35.588404       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 20:19:35.631553       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 20:19:35.775540       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 20:19:35.782264       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0717 20:19:35.783551       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 20:19:35.788817       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 20:19:36.041956       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 20:19:37.108054       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 20:19:37.122779       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 20:19:37.134673       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 20:19:49.651560       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0717 20:19:49.823196       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [3d2395acf9a2abd562a26e5699e773ff43e629f7a50225a8f9610626f15c412d] <==
	* I0717 20:19:48.913496       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0717 20:19:48.913583       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0717 20:19:48.913663       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0717 20:19:48.920343       1 shared_informer.go:318] Caches are synced for GC
	I0717 20:19:48.924798       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 20:19:48.925141       1 range_allocator.go:380] "Set node PodCIDR" node="dockerenv-035354" podCIDRs=[10.244.0.0/24]
	I0717 20:19:48.933433       1 shared_informer.go:318] Caches are synced for PVC protection
	I0717 20:19:48.936705       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 20:19:48.937191       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0717 20:19:48.942363       1 shared_informer.go:318] Caches are synced for daemon sets
	I0717 20:19:48.957994       1 shared_informer.go:318] Caches are synced for namespace
	I0717 20:19:48.977209       1 shared_informer.go:318] Caches are synced for stateful set
	I0717 20:19:48.993184       1 shared_informer.go:318] Caches are synced for service account
	I0717 20:19:49.003420       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 20:19:49.023541       1 shared_informer.go:318] Caches are synced for disruption
	I0717 20:19:49.037254       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0717 20:19:49.043116       1 shared_informer.go:318] Caches are synced for deployment
	I0717 20:19:49.052037       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 20:19:49.454638       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 20:19:49.480882       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 20:19:49.481036       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 20:19:49.675682       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ht6cm"
	I0717 20:19:49.688162       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qmwqm"
	I0717 20:19:49.843368       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 1"
	I0717 20:19:49.920982       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-45hgd"
	
	* 
	* ==> kube-proxy [43cd940ec3b0aa0f1a276c70f61fd36a96303e22fc343534e323378501516471] <==
	* I0717 20:19:51.279134       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0717 20:19:51.279994       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0717 20:19:51.280171       1 server_others.go:554] "Using iptables proxy"
	I0717 20:19:51.326154       1 server_others.go:192] "Using iptables Proxier"
	I0717 20:19:51.326358       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 20:19:51.326455       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 20:19:51.326540       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 20:19:51.326685       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 20:19:51.327356       1 server.go:658] "Version info" version="v1.27.3"
	I0717 20:19:51.327711       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 20:19:51.328580       1 config.go:188] "Starting service config controller"
	I0717 20:19:51.328771       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 20:19:51.328929       1 config.go:97] "Starting endpoint slice config controller"
	I0717 20:19:51.329010       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 20:19:51.329707       1 config.go:315] "Starting node config controller"
	I0717 20:19:51.329817       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 20:19:51.429886       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 20:19:51.429942       1 shared_informer.go:318] Caches are synced for node config
	I0717 20:19:51.429955       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [f47623ccd0201c7365246db1e598dd073fe470605f6c1dcc6bb1ef06dddb1ee4] <==
	* E0717 20:19:34.192964       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:19:34.193053       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:19:34.193139       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 20:19:34.193258       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 20:19:35.052621       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:19:35.052676       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 20:19:35.117897       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:19:35.117942       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 20:19:35.134896       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:19:35.134935       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 20:19:35.207365       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 20:19:35.207616       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 20:19:35.211449       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 20:19:35.211718       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 20:19:35.237900       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:19:35.238162       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 20:19:35.289264       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:19:35.289542       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 20:19:35.308613       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 20:19:35.308842       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 20:19:35.367194       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 20:19:35.367230       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 20:19:35.382844       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:19:35.382916       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0717 20:19:37.965076       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.214064    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dnp8\" (UniqueName: \"kubernetes.io/projected/046dd970-4957-4120-8f26-864b522215d8-kube-api-access-8dnp8\") pod \"storage-provisioner\" (UID: \"046dd970-4957-4120-8f26-864b522215d8\") " pod="kube-system/storage-provisioner"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: E0717 20:19:49.325517    1343 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: E0717 20:19:49.325553    1343 projected.go:198] Error preparing data for projected volume kube-api-access-8dnp8 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: E0717 20:19:49.325633    1343 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/046dd970-4957-4120-8f26-864b522215d8-kube-api-access-8dnp8 podName:046dd970-4957-4120-8f26-864b522215d8 nodeName:}" failed. No retries permitted until 2023-07-17 20:19:49.825609772 +0000 UTC m=+12.753515372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8dnp8" (UniqueName: "kubernetes.io/projected/046dd970-4957-4120-8f26-864b522215d8-kube-api-access-8dnp8") pod "storage-provisioner" (UID: "046dd970-4957-4120-8f26-864b522215d8") : configmap "kube-root-ca.crt" not found
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.706037    1343 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.734893    1343 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: W0717 20:19:49.741527    1343 reflector.go:533] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:dockerenv-035354" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'dockerenv-035354' and this object
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: E0717 20:19:49.741571    1343 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:dockerenv-035354" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'dockerenv-035354' and this object
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.817645    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/24b2d311-e767-4574-b9b0-933fb43cd150-cni-cfg\") pod \"kindnet-qmwqm\" (UID: \"24b2d311-e767-4574-b9b0-933fb43cd150\") " pod="kube-system/kindnet-qmwqm"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.817717    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24b2d311-e767-4574-b9b0-933fb43cd150-xtables-lock\") pod \"kindnet-qmwqm\" (UID: \"24b2d311-e767-4574-b9b0-933fb43cd150\") " pod="kube-system/kindnet-qmwqm"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.817744    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24b2d311-e767-4574-b9b0-933fb43cd150-lib-modules\") pod \"kindnet-qmwqm\" (UID: \"24b2d311-e767-4574-b9b0-933fb43cd150\") " pod="kube-system/kindnet-qmwqm"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.817809    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxllk\" (UniqueName: \"kubernetes.io/projected/24b2d311-e767-4574-b9b0-933fb43cd150-kube-api-access-hxllk\") pod \"kindnet-qmwqm\" (UID: \"24b2d311-e767-4574-b9b0-933fb43cd150\") " pod="kube-system/kindnet-qmwqm"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.919116    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5702cc40-b0ad-4c44-aff6-5188672e9a99-xtables-lock\") pod \"kube-proxy-ht6cm\" (UID: \"5702cc40-b0ad-4c44-aff6-5188672e9a99\") " pod="kube-system/kube-proxy-ht6cm"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.920038    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v452\" (UniqueName: \"kubernetes.io/projected/5702cc40-b0ad-4c44-aff6-5188672e9a99-kube-api-access-6v452\") pod \"kube-proxy-ht6cm\" (UID: \"5702cc40-b0ad-4c44-aff6-5188672e9a99\") " pod="kube-system/kube-proxy-ht6cm"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.920150    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5702cc40-b0ad-4c44-aff6-5188672e9a99-kube-proxy\") pod \"kube-proxy-ht6cm\" (UID: \"5702cc40-b0ad-4c44-aff6-5188672e9a99\") " pod="kube-system/kube-proxy-ht6cm"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.920193    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5702cc40-b0ad-4c44-aff6-5188672e9a99-lib-modules\") pod \"kube-proxy-ht6cm\" (UID: \"5702cc40-b0ad-4c44-aff6-5188672e9a99\") " pod="kube-system/kube-proxy-ht6cm"
	Jul 17 20:19:49 dockerenv-035354 kubelet[1343]: I0717 20:19:49.935601    1343 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 20:19:50 dockerenv-035354 kubelet[1343]: I0717 20:19:50.120751    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tblz\" (UniqueName: \"kubernetes.io/projected/c7ffad58-c69a-480f-8dbf-3f9155976259-kube-api-access-4tblz\") pod \"coredns-5d78c9869d-45hgd\" (UID: \"c7ffad58-c69a-480f-8dbf-3f9155976259\") " pod="kube-system/coredns-5d78c9869d-45hgd"
	Jul 17 20:19:50 dockerenv-035354 kubelet[1343]: I0717 20:19:50.120847    1343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7ffad58-c69a-480f-8dbf-3f9155976259-config-volume\") pod \"coredns-5d78c9869d-45hgd\" (UID: \"c7ffad58-c69a-480f-8dbf-3f9155976259\") " pod="kube-system/coredns-5d78c9869d-45hgd"
	Jul 17 20:19:50 dockerenv-035354 kubelet[1343]: E0717 20:19:50.397318    1343 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\": failed to find network info for sandbox \"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\""
	Jul 17 20:19:50 dockerenv-035354 kubelet[1343]: E0717 20:19:50.397395    1343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\": failed to find network info for sandbox \"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\"" pod="kube-system/coredns-5d78c9869d-45hgd"
	Jul 17 20:19:50 dockerenv-035354 kubelet[1343]: E0717 20:19:50.397429    1343 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\": failed to find network info for sandbox \"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\"" pod="kube-system/coredns-5d78c9869d-45hgd"
	Jul 17 20:19:50 dockerenv-035354 kubelet[1343]: E0717 20:19:50.397507    1343 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5d78c9869d-45hgd_kube-system(c7ffad58-c69a-480f-8dbf-3f9155976259)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5d78c9869d-45hgd_kube-system(c7ffad58-c69a-480f-8dbf-3f9155976259)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\\\": failed to find network info for sandbox \\\"f901e0e0d57fc28759eefc93f5cbcaa7f0ae912d702ae204d04d1e0c28a5ba08\\\"\"" pod="kube-system/coredns-5d78c9869d-45hgd" podUID=c7ffad58-c69a-480f-8dbf-3f9155976259
	Jul 17 20:19:51 dockerenv-035354 kubelet[1343]: I0717 20:19:51.450014    1343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.449974577 podCreationTimestamp="2023-07-17 20:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 20:19:51.449862248 +0000 UTC m=+14.377767840" watchObservedRunningTime="2023-07-17 20:19:51.449974577 +0000 UTC m=+14.377880177"
	Jul 17 20:19:51 dockerenv-035354 kubelet[1343]: I0717 20:19:51.492589    1343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-qmwqm" podStartSLOduration=2.492535783 podCreationTimestamp="2023-07-17 20:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 20:19:51.477692899 +0000 UTC m=+14.405598491" watchObservedRunningTime="2023-07-17 20:19:51.492535783 +0000 UTC m=+14.420441391"
	
	* 
	* ==> storage-provisioner [d5deb6102baaf682e4d1131f2d8b3921a8562d54d84565c850de085707c0795e] <==
	* I0717 20:19:50.470797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p dockerenv-035354 -n dockerenv-035354
helpers_test.go:261: (dbg) Run:  kubectl --context dockerenv-035354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-5d78c9869d-45hgd
helpers_test.go:274: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context dockerenv-035354 describe pod coredns-5d78c9869d-45hgd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context dockerenv-035354 describe pod coredns-5d78c9869d-45hgd: exit status 1 (97.738652ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5d78c9869d-45hgd" not found

** /stderr **
helpers_test.go:279: kubectl --context dockerenv-035354 describe pod coredns-5d78c9869d-45hgd: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-035354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-035354
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-035354: (2.017269228s)
--- FAIL: TestDockerEnvContainerd (46.25s)

TestErrorSpam/setup (29.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-858640 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-858640 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-858640 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-858640 --driver=docker  --container-runtime=containerd: (29.984629444s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1"
error_spam_test.go:110: minikube stdout:
* [nospam-858640] minikube v1.30.1 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=16890
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node nospam-858640 in cluster nospam-858640
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-858640" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
--- FAIL: TestErrorSpam/setup (29.98s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image load --daemon gcr.io/google-containers/addon-resizer:functional-949323 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 image load --daemon gcr.io/google-containers/addon-resizer:functional-949323 --alsologtostderr: (3.517044721s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-949323" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.81s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image load --daemon gcr.io/google-containers/addon-resizer:functional-949323 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 image load --daemon gcr.io/google-containers/addon-resizer:functional-949323 --alsologtostderr: (3.250073628s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-949323" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.49s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.733411582s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-949323
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image load --daemon gcr.io/google-containers/addon-resizer:functional-949323 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 image load --daemon gcr.io/google-containers/addon-resizer:functional-949323 --alsologtostderr: (3.677487315s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-949323" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.73s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image save gcr.io/google-containers/addon-resizer:functional-949323 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I0717 20:23:24.479386  932318 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:23:24.480260  932318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:23:24.480303  932318 out.go:309] Setting ErrFile to fd 2...
	I0717 20:23:24.480324  932318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:23:24.480656  932318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:23:24.481335  932318 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:23:24.481597  932318 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:23:24.482114  932318 cli_runner.go:164] Run: docker container inspect functional-949323 --format={{.State.Status}}
	I0717 20:23:24.505669  932318 ssh_runner.go:195] Run: systemctl --version
	I0717 20:23:24.505763  932318 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-949323
	I0717 20:23:24.532367  932318 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33730 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/functional-949323/id_rsa Username:docker}
	I0717 20:23:24.634899  932318 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0717 20:23:24.634950  932318 cache_images.go:254] Failed to load cached images for profile functional-949323. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0717 20:23:24.634980  932318 cache_images.go:262] succeeded pushing to: 
	I0717 20:23:24.634986  932318 cache_images.go:263] failed pushing to: functional-949323

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
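Note: the ImageLoadFromFile failure above is a downstream effect of the earlier ImageSaveToFile failure. `image save` never wrote the tarball, so `image load` stats a path that does not exist (`stat ...: no such file or directory`). A minimal sketch of a pre-check that separates the two failures, using a hypothetical tarball path (not the test's actual workspace path):

```shell
# Hypothetical path standing in for the addon-resizer-save.tar the test expects.
TARBALL="${TMPDIR:-/tmp}/addon-resizer-save.tar"

# `image load` can only succeed if the preceding `image save` produced the file,
# so checking for the file first tells upstream and downstream failures apart.
if [ -f "$TARBALL" ]; then
  echo "tarball present: safe to run 'image load'"
else
  echo "tarball missing: 'image save' failed upstream"
fi
```

Under such a check, only ImageSaveToFile would need triage; ImageLoadFromFile could be skipped rather than reported as an independent failure.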

TestIngressAddonLegacy/serial/ValidateIngressAddons (55.91s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-786531 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-786531 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.045185266s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-786531 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-786531 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0cb7946e-d951-438c-9536-f1168eeabdd7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0cb7946e-d951-438c-9536-f1168eeabdd7] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.008974448s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-786531 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-786531 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-786531 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.009073811s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-786531 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-786531 addons disable ingress-dns --alsologtostderr -v=1: (6.436529441s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-786531 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-786531 addons disable ingress --alsologtostderr -v=1: (7.582067207s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-786531
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-786531:

-- stdout --
	[
	    {
	        "Id": "ecb086dfea03345bf2c913f01091fd937fd17b868c6b3ea2bfdcbb3e77e27167",
	        "Created": "2023-07-17T20:24:27.277113176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 936589,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:24:27.618887115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/ecb086dfea03345bf2c913f01091fd937fd17b868c6b3ea2bfdcbb3e77e27167/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ecb086dfea03345bf2c913f01091fd937fd17b868c6b3ea2bfdcbb3e77e27167/hostname",
	        "HostsPath": "/var/lib/docker/containers/ecb086dfea03345bf2c913f01091fd937fd17b868c6b3ea2bfdcbb3e77e27167/hosts",
	        "LogPath": "/var/lib/docker/containers/ecb086dfea03345bf2c913f01091fd937fd17b868c6b3ea2bfdcbb3e77e27167/ecb086dfea03345bf2c913f01091fd937fd17b868c6b3ea2bfdcbb3e77e27167-json.log",
	        "Name": "/ingress-addon-legacy-786531",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-786531:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-786531",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/349f9362e6640fe51a97dd6aa6a5999f18e3119235bdd189602d3057a1889cb7-init/diff:/var/lib/docker/overlay2/7007f4a8945aebd939b8429923b1b654b284bda949467104beab22408cb6f264/diff",
	                "MergedDir": "/var/lib/docker/overlay2/349f9362e6640fe51a97dd6aa6a5999f18e3119235bdd189602d3057a1889cb7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/349f9362e6640fe51a97dd6aa6a5999f18e3119235bdd189602d3057a1889cb7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/349f9362e6640fe51a97dd6aa6a5999f18e3119235bdd189602d3057a1889cb7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-786531",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-786531/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-786531",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-786531",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-786531",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "474f7adf1b2e49aa297e5b84a93c6b0ba308aebd4d56a50be5c0e3588413ffa8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33735"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33731"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33733"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33732"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/474f7adf1b2e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-786531": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ecb086dfea03",
	                        "ingress-addon-legacy-786531"
	                    ],
	                    "NetworkID": "9e0fc434af34c6739a2ceb1df48d75ddac8fed0a0d4baec487abe958d210036d",
	                    "EndpointID": "d13b6344d4d8614867aff30b807ba909a3edefc7c9222d82437d71ddb979bd26",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-786531 -n ingress-addon-legacy-786531
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-786531 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-786531 logs -n 25: (1.42283657s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-949323                                                  | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| mount          | -p functional-949323                                                  | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| mount          | -p functional-949323                                                  | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| ssh            | functional-949323 ssh findmnt                                         | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:23 UTC |
	|                | -T /mount1                                                            |                             |         |         |                     |                     |
	| ssh            | functional-949323 ssh findmnt                                         | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:23 UTC |
	|                | -T /mount2                                                            |                             |         |         |                     |                     |
	| ssh            | functional-949323 ssh findmnt                                         | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:23 UTC |
	|                | -T /mount3                                                            |                             |         |         |                     |                     |
	| mount          | -p functional-949323                                                  | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC |                     |
	|                | --kill=true                                                           |                             |         |         |                     |                     |
	| update-context | functional-949323                                                     | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:23 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| update-context | functional-949323                                                     | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:23 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| update-context | functional-949323                                                     | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:23 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| image          | functional-949323                                                     | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:23 UTC |
	|                | image ls --format short                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-949323                                                     | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:23 UTC |
	|                | image ls --format yaml                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| ssh            | functional-949323 ssh pgrep                                           | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC |                     |
	|                | buildkitd                                                             |                             |         |         |                     |                     |
	| image          | functional-949323 image build -t                                      | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:23 UTC | 17 Jul 23 20:24 UTC |
	|                | localhost/my-image:functional-949323                                  |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                             |         |         |                     |                     |
	| image          | functional-949323 image ls                                            | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:24 UTC | 17 Jul 23 20:24 UTC |
	| image          | functional-949323                                                     | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:24 UTC | 17 Jul 23 20:24 UTC |
	|                | image ls --format json                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-949323                                                     | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:24 UTC | 17 Jul 23 20:24 UTC |
	|                | image ls --format table                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| delete         | -p functional-949323                                                  | functional-949323           | jenkins | v1.30.1 | 17 Jul 23 20:24 UTC | 17 Jul 23 20:24 UTC |
	| start          | -p ingress-addon-legacy-786531                                        | ingress-addon-legacy-786531 | jenkins | v1.30.1 | 17 Jul 23 20:24 UTC | 17 Jul 23 20:25 UTC |
	|                | --kubernetes-version=v1.18.20                                         |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                  |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                        |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-786531                                           | ingress-addon-legacy-786531 | jenkins | v1.30.1 | 17 Jul 23 20:25 UTC | 17 Jul 23 20:25 UTC |
	|                | addons enable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-786531                                           | ingress-addon-legacy-786531 | jenkins | v1.30.1 | 17 Jul 23 20:25 UTC | 17 Jul 23 20:25 UTC |
	|                | addons enable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-786531                                           | ingress-addon-legacy-786531 | jenkins | v1.30.1 | 17 Jul 23 20:26 UTC | 17 Jul 23 20:26 UTC |
	|                | ssh curl -s http://127.0.0.1/                                         |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                          |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-786531 ip                                        | ingress-addon-legacy-786531 | jenkins | v1.30.1 | 17 Jul 23 20:26 UTC | 17 Jul 23 20:26 UTC |
	| addons         | ingress-addon-legacy-786531                                           | ingress-addon-legacy-786531 | jenkins | v1.30.1 | 17 Jul 23 20:26 UTC | 17 Jul 23 20:26 UTC |
	|                | addons disable ingress-dns                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-786531                                           | ingress-addon-legacy-786531 | jenkins | v1.30.1 | 17 Jul 23 20:26 UTC | 17 Jul 23 20:26 UTC |
	|                | addons disable ingress                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:24:06
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:24:06.716947  936132 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:24:06.717150  936132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:24:06.717181  936132 out.go:309] Setting ErrFile to fd 2...
	I0717 20:24:06.717202  936132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:24:06.717511  936132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:24:06.717970  936132 out.go:303] Setting JSON to false
	I0717 20:24:06.718979  936132 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14794,"bootTime":1689610653,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:24:06.719084  936132 start.go:138] virtualization:  
	I0717 20:24:06.722034  936132 out.go:177] * [ingress-addon-legacy-786531] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:24:06.724383  936132 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:24:06.724476  936132 notify.go:220] Checking for updates...
	I0717 20:24:06.729499  936132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:24:06.731421  936132 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:24:06.733477  936132 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:24:06.735665  936132 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:24:06.737750  936132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:24:06.740220  936132 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:24:06.766380  936132 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:24:06.766482  936132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:24:06.855972  936132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 20:24:06.845692558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:24:06.856119  936132 docker.go:294] overlay module found
	I0717 20:24:06.859720  936132 out.go:177] * Using the docker driver based on user configuration
	I0717 20:24:06.861737  936132 start.go:298] selected driver: docker
	I0717 20:24:06.861759  936132 start.go:880] validating driver "docker" against <nil>
	I0717 20:24:06.861773  936132 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:24:06.862518  936132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:24:06.934730  936132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 20:24:06.924188904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:24:06.934886  936132 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 20:24:06.935110  936132 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 20:24:06.936927  936132 out.go:177] * Using Docker driver with root privileges
	I0717 20:24:06.939253  936132 cni.go:84] Creating CNI manager for ""
	I0717 20:24:06.939272  936132 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:24:06.939288  936132 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 20:24:06.939299  936132 start_flags.go:319] config:
	{Name:ingress-addon-legacy-786531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-786531 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:24:06.941428  936132 out.go:177] * Starting control plane node ingress-addon-legacy-786531 in cluster ingress-addon-legacy-786531
	I0717 20:24:06.943535  936132 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 20:24:06.945341  936132 out.go:177] * Pulling base image ...
	I0717 20:24:06.947213  936132 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0717 20:24:06.947287  936132 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 20:24:06.964725  936132 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 20:24:06.964769  936132 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 20:24:07.020630  936132 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0717 20:24:07.020662  936132 cache.go:57] Caching tarball of preloaded images
	I0717 20:24:07.020825  936132 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0717 20:24:07.024305  936132 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 20:24:07.026355  936132 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:24:07.154729  936132 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0717 20:24:19.430231  936132 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:24:19.430341  936132 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:24:20.551949  936132 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
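	The preload tarball fetched above follows a predictable naming scheme. A minimal sketch of how that URL is assembled (the `preload_url` helper is hypothetical; the bucket path, schema version `v18`, Kubernetes version, runtime, and architecture are taken verbatim from this log):

```shell
# Hypothetical helper: rebuild the preload tarball URL from its components,
# mirroring the URL downloaded in the log above.
preload_url() {
  local storage_ver="$1" k8s_ver="$2" runtime="$3" arch="$4"
  echo "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/${storage_ver}/${k8s_ver}/preloaded-images-k8s-${storage_ver}-${k8s_ver}-${runtime}-overlay2-${arch}.tar.lz4"
}

preload_url v18 v1.18.20 containerd arm64
```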
	I0717 20:24:20.552322  936132 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/config.json ...
	I0717 20:24:20.552357  936132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/config.json: {Name:mk14573778d67c6729cfe25435ecf16bf4cd3ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:24:20.552543  936132 cache.go:195] Successfully downloaded all kic artifacts
	I0717 20:24:20.552592  936132 start.go:365] acquiring machines lock for ingress-addon-legacy-786531: {Name:mk7b631240cc7863658d0b930b1b70a1d5dc8322 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:24:20.552650  936132 start.go:369] acquired machines lock for "ingress-addon-legacy-786531" in 47.245µs
	I0717 20:24:20.552675  936132 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-786531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-786531 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 20:24:20.552748  936132 start.go:125] createHost starting for "" (driver="docker")
	I0717 20:24:20.555117  936132 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0717 20:24:20.555335  936132 start.go:159] libmachine.API.Create for "ingress-addon-legacy-786531" (driver="docker")
	I0717 20:24:20.555381  936132 client.go:168] LocalClient.Create starting
	I0717 20:24:20.555450  936132 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem
	I0717 20:24:20.555488  936132 main.go:141] libmachine: Decoding PEM data...
	I0717 20:24:20.555508  936132 main.go:141] libmachine: Parsing certificate...
	I0717 20:24:20.555571  936132 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem
	I0717 20:24:20.555593  936132 main.go:141] libmachine: Decoding PEM data...
	I0717 20:24:20.555607  936132 main.go:141] libmachine: Parsing certificate...
	I0717 20:24:20.555980  936132 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-786531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 20:24:20.575711  936132 cli_runner.go:211] docker network inspect ingress-addon-legacy-786531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 20:24:20.575798  936132 network_create.go:281] running [docker network inspect ingress-addon-legacy-786531] to gather additional debugging logs...
	I0717 20:24:20.575820  936132 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-786531
	W0717 20:24:20.592974  936132 cli_runner.go:211] docker network inspect ingress-addon-legacy-786531 returned with exit code 1
	I0717 20:24:20.593012  936132 network_create.go:284] error running [docker network inspect ingress-addon-legacy-786531]: docker network inspect ingress-addon-legacy-786531: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-786531 not found
	I0717 20:24:20.593027  936132 network_create.go:286] output of [docker network inspect ingress-addon-legacy-786531]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-786531 not found
	
	** /stderr **
	I0717 20:24:20.593092  936132 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:24:20.611213  936132 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000131510}
	I0717 20:24:20.611250  936132 network_create.go:123] attempt to create docker network ingress-addon-legacy-786531 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 20:24:20.611308  936132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-786531 ingress-addon-legacy-786531
	I0717 20:24:20.687534  936132 network_create.go:107] docker network ingress-addon-legacy-786531 192.168.49.0/24 created
	I0717 20:24:20.687565  936132 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-786531" container
	I0717 20:24:20.687646  936132 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 20:24:20.704263  936132 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-786531 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-786531 --label created_by.minikube.sigs.k8s.io=true
	I0717 20:24:20.723320  936132 oci.go:103] Successfully created a docker volume ingress-addon-legacy-786531
	I0717 20:24:20.723412  936132 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-786531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-786531 --entrypoint /usr/bin/test -v ingress-addon-legacy-786531:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 20:24:22.246233  936132 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-786531-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-786531 --entrypoint /usr/bin/test -v ingress-addon-legacy-786531:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.52277571s)
	I0717 20:24:22.246265  936132 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-786531
	I0717 20:24:22.246290  936132 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0717 20:24:22.246309  936132 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 20:24:22.246397  936132 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-786531:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 20:24:27.194683  936132 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-786531:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.948238414s)
	I0717 20:24:27.194719  936132 kic.go:199] duration metric: took 4.948407 seconds to extract preloaded images to volume
	W0717 20:24:27.194856  936132 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 20:24:27.194968  936132 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 20:24:27.260703  936132 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-786531 --name ingress-addon-legacy-786531 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-786531 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-786531 --network ingress-addon-legacy-786531 --ip 192.168.49.2 --volume ingress-addon-legacy-786531:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 20:24:27.627128  936132 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-786531 --format={{.State.Running}}
	I0717 20:24:27.652720  936132 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-786531 --format={{.State.Status}}
	I0717 20:24:27.681790  936132 cli_runner.go:164] Run: docker exec ingress-addon-legacy-786531 stat /var/lib/dpkg/alternatives/iptables
	I0717 20:24:27.752966  936132 oci.go:144] the created container "ingress-addon-legacy-786531" has a running status.
	I0717 20:24:27.752993  936132 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa...
	I0717 20:24:28.496793  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 20:24:28.496900  936132 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 20:24:28.525630  936132 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-786531 --format={{.State.Status}}
	I0717 20:24:28.553120  936132 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 20:24:28.553140  936132 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-786531 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 20:24:28.645944  936132 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-786531 --format={{.State.Status}}
	I0717 20:24:28.666324  936132 machine.go:88] provisioning docker machine ...
	I0717 20:24:28.666355  936132 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-786531"
	I0717 20:24:28.666422  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:24:28.691152  936132 main.go:141] libmachine: Using SSH client type: native
	I0717 20:24:28.691654  936132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33735 <nil> <nil>}
	I0717 20:24:28.691674  936132 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-786531 && echo "ingress-addon-legacy-786531" | sudo tee /etc/hostname
	I0717 20:24:28.864300  936132 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-786531
	
	I0717 20:24:28.864462  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:24:28.901098  936132 main.go:141] libmachine: Using SSH client type: native
	I0717 20:24:28.901536  936132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33735 <nil> <nil>}
	I0717 20:24:28.901554  936132 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-786531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-786531/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-786531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:24:29.038442  936132 main.go:141] libmachine: SSH cmd err, output: <nil>: 
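	The SSH command above patches the container's `/etc/hosts` so `127.0.1.1` maps to the node hostname. The same logic can be exercised locally against a scratch file (a sketch, assuming GNU grep/sed; no sudo, and the sample `old-name` entry is invented for the demonstration):

```shell
# Reproduce the /etc/hosts hostname patch from the log on a temp file.
hosts="$(mktemp)"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=ingress-addon-legacy-786531

# Same shape as the logged script: add the name only if absent, preferring
# to rewrite an existing 127.0.1.1 line over appending a new one.
if ! grep -q "\s${name}\$" "$hosts"; then
  if grep -q '^127.0.1.1\s' "$hosts"; then
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${name}/" "$hosts"
  else
    echo "127.0.1.1 ${name}" >> "$hosts"
  fi
fi
cat "$hosts"
```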
	I0717 20:24:29.038466  936132 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-898608/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-898608/.minikube}
	I0717 20:24:29.038487  936132 ubuntu.go:177] setting up certificates
	I0717 20:24:29.038496  936132 provision.go:83] configureAuth start
	I0717 20:24:29.038557  936132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-786531
	I0717 20:24:29.056308  936132 provision.go:138] copyHostCerts
	I0717 20:24:29.056351  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem
	I0717 20:24:29.056384  936132 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem, removing ...
	I0717 20:24:29.056390  936132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem
	I0717 20:24:29.056472  936132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem (1078 bytes)
	I0717 20:24:29.056550  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem
	I0717 20:24:29.056569  936132 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem, removing ...
	I0717 20:24:29.056573  936132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem
	I0717 20:24:29.056598  936132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem (1123 bytes)
	I0717 20:24:29.056638  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem
	I0717 20:24:29.056652  936132 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem, removing ...
	I0717 20:24:29.056656  936132 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem
	I0717 20:24:29.056684  936132 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem (1675 bytes)
	I0717 20:24:29.056728  936132 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-786531 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-786531]
	I0717 20:24:30.193014  936132 provision.go:172] copyRemoteCerts
	I0717 20:24:30.193121  936132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:24:30.193171  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:24:30.213825  936132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa Username:docker}
	I0717 20:24:30.312464  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 20:24:30.312547  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 20:24:30.342287  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 20:24:30.342354  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 20:24:30.372582  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 20:24:30.372688  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 20:24:30.402183  936132 provision.go:86] duration metric: configureAuth took 1.36367297s
	I0717 20:24:30.402209  936132 ubuntu.go:193] setting minikube options for container-runtime
	I0717 20:24:30.402430  936132 config.go:182] Loaded profile config "ingress-addon-legacy-786531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0717 20:24:30.402438  936132 machine.go:91] provisioned docker machine in 1.736095575s
	I0717 20:24:30.402444  936132 client.go:171] LocalClient.Create took 9.847054288s
	I0717 20:24:30.402456  936132 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-786531" took 9.847121128s
	I0717 20:24:30.402464  936132 start.go:300] post-start starting for "ingress-addon-legacy-786531" (driver="docker")
	I0717 20:24:30.402471  936132 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:24:30.402527  936132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:24:30.402565  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:24:30.423580  936132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa Username:docker}
	I0717 20:24:30.519791  936132 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:24:30.523976  936132 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 20:24:30.524019  936132 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 20:24:30.524031  936132 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 20:24:30.524038  936132 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 20:24:30.524052  936132 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/addons for local assets ...
	I0717 20:24:30.524115  936132 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/files for local assets ...
	I0717 20:24:30.524203  936132 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem -> 9039972.pem in /etc/ssl/certs
	I0717 20:24:30.524211  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem -> /etc/ssl/certs/9039972.pem
	I0717 20:24:30.524319  936132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:24:30.534914  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem --> /etc/ssl/certs/9039972.pem (1708 bytes)
	I0717 20:24:30.563371  936132 start.go:303] post-start completed in 160.892711ms
	I0717 20:24:30.563745  936132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-786531
	I0717 20:24:30.581793  936132 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/config.json ...
	I0717 20:24:30.582079  936132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:24:30.582142  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:24:30.604420  936132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa Username:docker}
	I0717 20:24:30.694745  936132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 20:24:30.700424  936132 start.go:128] duration metric: createHost completed in 10.147660882s
	I0717 20:24:30.700457  936132 start.go:83] releasing machines lock for "ingress-addon-legacy-786531", held for 10.147794223s
	I0717 20:24:30.700529  936132 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-786531
	I0717 20:24:30.717920  936132 ssh_runner.go:195] Run: cat /version.json
	I0717 20:24:30.717973  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:24:30.717972  936132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:24:30.718032  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:24:30.740198  936132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa Username:docker}
	I0717 20:24:30.751838  936132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa Username:docker}
	W0717 20:24:30.982404  936132 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 20:24:30.982486  936132 ssh_runner.go:195] Run: systemctl --version
	I0717 20:24:30.988424  936132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 20:24:30.995042  936132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 20:24:31.029832  936132 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 20:24:31.029918  936132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:24:31.063715  936132 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 20:24:31.063777  936132 start.go:469] detecting cgroup driver to use...
	I0717 20:24:31.063826  936132 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 20:24:31.063895  936132 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 20:24:31.080172  936132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 20:24:31.094992  936132 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:24:31.095063  936132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:24:31.112455  936132 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:24:31.130340  936132 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 20:24:31.222503  936132 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:24:31.331495  936132 docker.go:212] disabling docker service ...
	I0717 20:24:31.331576  936132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:24:31.354230  936132 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:24:31.369749  936132 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:24:31.470035  936132 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:24:31.570838  936132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:24:31.584154  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:24:31.605228  936132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0717 20:24:31.617510  936132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 20:24:31.629398  936132 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 20:24:31.629516  936132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 20:24:31.641527  936132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:24:31.653712  936132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 20:24:31.665581  936132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:24:31.677537  936132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:24:31.688982  936132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 20:24:31.701282  936132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:24:31.711945  936132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:24:31.722644  936132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:24:31.814610  936132 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 20:24:31.900350  936132 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 20:24:31.900436  936132 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 20:24:31.905737  936132 start.go:537] Will wait 60s for crictl version
	I0717 20:24:31.905812  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:31.910731  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:24:31.954606  936132 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0717 20:24:31.954675  936132 ssh_runner.go:195] Run: containerd --version
	I0717 20:24:31.983911  936132 ssh_runner.go:195] Run: containerd --version
	I0717 20:24:32.024386  936132 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.21 ...
	I0717 20:24:32.026975  936132 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-786531 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:24:32.045768  936132 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 20:24:32.050666  936132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:24:32.064589  936132 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0717 20:24:32.064660  936132 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:24:32.110096  936132 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 20:24:32.110181  936132 ssh_runner.go:195] Run: which lz4
	I0717 20:24:32.114837  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0717 20:24:32.114990  936132 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 20:24:32.119942  936132 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 20:24:32.119980  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0717 20:24:34.368363  936132 containerd.go:547] Took 2.253449 seconds to copy over tarball
	I0717 20:24:34.368513  936132 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 20:24:37.158156  936132 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.789599228s)
	I0717 20:24:37.158227  936132 containerd.go:554] Took 2.789797 seconds to extract the tarball
	I0717 20:24:37.158255  936132 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 20:24:37.335636  936132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:24:37.437507  936132 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 20:24:37.533449  936132 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:24:37.586310  936132 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 20:24:37.586450  936132 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 20:24:37.586669  936132 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 20:24:37.586739  936132 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 20:24:37.586809  936132 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 20:24:37.586940  936132 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:24:37.587100  936132 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 20:24:37.587201  936132 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 20:24:37.587522  936132 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 20:24:37.587728  936132 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 20:24:37.588166  936132 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 20:24:37.588371  936132 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 20:24:37.588611  936132 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 20:24:37.588749  936132 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 20:24:37.589698  936132 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 20:24:37.591202  936132 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:24:37.591344  936132 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 20:24:38.009027  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W0717 20:24:38.029168  936132 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 20:24:38.029583  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W0717 20:24:38.044741  936132 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 20:24:38.044960  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W0717 20:24:38.057030  936132 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0717 20:24:38.057246  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	W0717 20:24:38.058844  936132 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0717 20:24:38.059087  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W0717 20:24:38.060368  936132 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 20:24:38.060631  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W0717 20:24:38.073231  936132 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0717 20:24:38.073422  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W0717 20:24:38.237458  936132 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0717 20:24:38.237582  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 20:24:38.624379  936132 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0717 20:24:38.624420  936132 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 20:24:38.624469  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:38.816405  936132 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0717 20:24:38.816440  936132 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 20:24:38.816505  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:38.850081  936132 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0717 20:24:38.850167  936132 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 20:24:38.850249  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:38.865801  936132 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0717 20:24:38.865880  936132 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 20:24:38.865961  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:38.880426  936132 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0717 20:24:38.880467  936132 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 20:24:38.880524  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:38.957938  936132 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0717 20:24:38.957992  936132 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 20:24:38.958059  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:38.958125  936132 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0717 20:24:38.958147  936132 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 20:24:38.958177  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:38.973578  936132 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0717 20:24:38.973667  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 20:24:38.973589  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 20:24:38.973673  936132 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:24:38.973744  936132 ssh_runner.go:195] Run: which crictl
	I0717 20:24:38.973623  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 20:24:38.973808  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 20:24:38.973845  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 20:24:38.973892  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 20:24:38.973959  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 20:24:39.190695  936132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 20:24:39.190742  936132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 20:24:39.190773  936132 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:24:39.190799  936132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0717 20:24:39.190850  936132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 20:24:39.190919  936132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0717 20:24:39.190943  936132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0717 20:24:39.190964  936132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 20:24:39.250176  936132 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 20:24:39.250285  936132 cache_images.go:92] LoadImages completed in 1.663941689s
	W0717 20:24:39.250379  936132 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16890-898608/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0717 20:24:39.250437  936132 ssh_runner.go:195] Run: sudo crictl info
	I0717 20:24:39.295229  936132 cni.go:84] Creating CNI manager for ""
	I0717 20:24:39.295260  936132 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:24:39.295270  936132 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 20:24:39.295310  936132 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-786531 NodeName:ingress-addon-legacy-786531 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 20:24:39.295469  936132 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-786531"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 20:24:39.295571  936132 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-786531 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-786531 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 20:24:39.295656  936132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 20:24:39.306478  936132 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 20:24:39.306572  936132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 20:24:39.317462  936132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0717 20:24:39.339435  936132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 20:24:39.361761  936132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I0717 20:24:39.383918  936132 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 20:24:39.388469  936132 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:24:39.402267  936132 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531 for IP: 192.168.49.2
	I0717 20:24:39.402339  936132 certs.go:190] acquiring lock for shared ca certs: {Name:mk081da4b0c80820af8357079096999320bef2ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:24:39.402505  936132 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key
	I0717 20:24:39.402552  936132 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key
	I0717 20:24:39.402604  936132 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.key
	I0717 20:24:39.402619  936132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt with IP's: []
	I0717 20:24:39.637037  936132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt ...
	I0717 20:24:39.637067  936132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: {Name:mk5c2bb85ff738ad3ba76f60f06585658c68715f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:24:39.637289  936132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.key ...
	I0717 20:24:39.637303  936132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.key: {Name:mkcae3e54533c311cde86a155b4d0d647436a6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:24:39.637399  936132 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.key.dd3b5fb2
	I0717 20:24:39.637418  936132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 20:24:40.028308  936132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.crt.dd3b5fb2 ...
	I0717 20:24:40.028348  936132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.crt.dd3b5fb2: {Name:mkcdc744762ab5c5d62773d171512cddfd0d3623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:24:40.028566  936132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.key.dd3b5fb2 ...
	I0717 20:24:40.028580  936132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.key.dd3b5fb2: {Name:mk95f7075936c49f6f459b2860d970ad930fceff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:24:40.028665  936132 certs.go:337] copying /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.crt
	I0717 20:24:40.028743  936132 certs.go:341] copying /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.key
	I0717 20:24:40.028796  936132 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.key
	I0717 20:24:40.028814  936132 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.crt with IP's: []
	I0717 20:24:40.324503  936132 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.crt ...
	I0717 20:24:40.324536  936132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.crt: {Name:mk20685e15147dd686d6db61e2983a1f48770e0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:24:40.324728  936132 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.key ...
	I0717 20:24:40.324742  936132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.key: {Name:mkcda7508dc93992c89337eb61f3f484ce690a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:24:40.324826  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 20:24:40.324846  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 20:24:40.324877  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 20:24:40.324901  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 20:24:40.324918  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 20:24:40.324935  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 20:24:40.324950  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 20:24:40.324962  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 20:24:40.325021  936132 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/903997.pem (1338 bytes)
	W0717 20:24:40.325063  936132 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/903997_empty.pem, impossibly tiny 0 bytes
	I0717 20:24:40.325077  936132 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 20:24:40.325109  936132 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem (1078 bytes)
	I0717 20:24:40.325136  936132 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem (1123 bytes)
	I0717 20:24:40.325167  936132 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem (1675 bytes)
	I0717 20:24:40.325213  936132 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem (1708 bytes)
	I0717 20:24:40.325249  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:24:40.325267  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/903997.pem -> /usr/share/ca-certificates/903997.pem
	I0717 20:24:40.325281  936132 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem -> /usr/share/ca-certificates/9039972.pem
	I0717 20:24:40.325862  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 20:24:40.357397  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 20:24:40.388102  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 20:24:40.418910  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 20:24:40.450129  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 20:24:40.478923  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 20:24:40.508053  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 20:24:40.537331  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 20:24:40.566902  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 20:24:40.596656  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/903997.pem --> /usr/share/ca-certificates/903997.pem (1338 bytes)
	I0717 20:24:40.626121  936132 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem --> /usr/share/ca-certificates/9039972.pem (1708 bytes)
	I0717 20:24:40.654689  936132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 20:24:40.677040  936132 ssh_runner.go:195] Run: openssl version
	I0717 20:24:40.684477  936132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 20:24:40.696193  936132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:24:40.701066  936132 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 20:15 /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:24:40.701149  936132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:24:40.709938  936132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 20:24:40.722112  936132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/903997.pem && ln -fs /usr/share/ca-certificates/903997.pem /etc/ssl/certs/903997.pem"
	I0717 20:24:40.734663  936132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/903997.pem
	I0717 20:24:40.739637  936132 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 20:20 /usr/share/ca-certificates/903997.pem
	I0717 20:24:40.739707  936132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/903997.pem
	I0717 20:24:40.748912  936132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/903997.pem /etc/ssl/certs/51391683.0"
	I0717 20:24:40.767181  936132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9039972.pem && ln -fs /usr/share/ca-certificates/9039972.pem /etc/ssl/certs/9039972.pem"
	I0717 20:24:40.780337  936132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9039972.pem
	I0717 20:24:40.785287  936132 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 20:20 /usr/share/ca-certificates/9039972.pem
	I0717 20:24:40.785408  936132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9039972.pem
	I0717 20:24:40.794486  936132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9039972.pem /etc/ssl/certs/3ec20f2e.0"
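The `openssl x509 -hash` / `ln -fs …/<hash>.0` pairs above implement OpenSSL's hashed-directory lookup convention: libssl finds a CA in `/etc/ssl/certs` by a short hash of its subject name rather than by filename. A minimal sketch of that step (the `link_cert` helper name is ours, not minikube's; the runner does this with `sudo` against `/etc/ssl/certs`):

```shell
# Recreate the hash-named symlink step from the log above.
# OpenSSL resolves CAs in a directory via "<subject-hash>.0" links,
# e.g. the b5213941.0 link for minikubeCA.pem seen in the log.
link_cert() {
  cert=$1; dir=$2
  hash=$(openssl x509 -hash -noout -in "$cert")  # subject-name hash
  ln -fs "$cert" "$dir/$hash.0"                  # c_rehash-style name
}
```

For example, `link_cert /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs` (run as root) reproduces the `ln -fs … /etc/ssl/certs/b5213941.0` command in the log.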
	I0717 20:24:40.806783  936132 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 20:24:40.811275  936132 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 20:24:40.811327  936132 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-786531 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-786531 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:24:40.811412  936132 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 20:24:40.811468  936132 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 20:24:40.855224  936132 cri.go:89] found id: ""
	I0717 20:24:40.855291  936132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 20:24:40.866749  936132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:24:40.878124  936132 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 20:24:40.878192  936132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:24:40.889047  936132 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 20:24:40.889098  936132 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 20:24:40.946245  936132 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 20:24:40.946605  936132 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 20:24:41.007318  936132 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 20:24:41.007389  936132 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-aws
	I0717 20:24:41.007427  936132 kubeadm.go:322] OS: Linux
	I0717 20:24:41.007481  936132 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 20:24:41.007531  936132 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 20:24:41.007579  936132 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 20:24:41.007645  936132 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 20:24:41.007695  936132 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 20:24:41.007745  936132 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 20:24:41.103586  936132 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 20:24:41.103696  936132 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 20:24:41.103809  936132 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 20:24:41.349554  936132 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 20:24:41.351266  936132 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 20:24:41.351346  936132 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 20:24:41.467720  936132 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 20:24:41.472176  936132 out.go:204]   - Generating certificates and keys ...
	I0717 20:24:41.472353  936132 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 20:24:41.472420  936132 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 20:24:42.134297  936132 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 20:24:42.842461  936132 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 20:24:43.249798  936132 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 20:24:44.041722  936132 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 20:24:44.197005  936132 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 20:24:44.197392  936132 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-786531 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 20:24:44.384014  936132 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 20:24:44.384290  936132 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-786531 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 20:24:45.186932  936132 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 20:24:45.608040  936132 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 20:24:45.861801  936132 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 20:24:45.862209  936132 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 20:24:46.319291  936132 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 20:24:46.586523  936132 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 20:24:47.983877  936132 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 20:24:48.712399  936132 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 20:24:48.713227  936132 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 20:24:48.716392  936132 out.go:204]   - Booting up control plane ...
	I0717 20:24:48.716505  936132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 20:24:48.728286  936132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 20:24:48.728401  936132 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 20:24:48.728491  936132 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 20:24:48.728638  936132 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 20:24:59.730275  936132 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.002482 seconds
	I0717 20:24:59.730396  936132 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 20:24:59.745247  936132 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 20:25:00.299950  936132 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 20:25:00.300107  936132 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-786531 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 20:25:00.811380  936132 kubeadm.go:322] [bootstrap-token] Using token: nitptj.lpmec0e46e4rpo6y
	I0717 20:25:00.814049  936132 out.go:204]   - Configuring RBAC rules ...
	I0717 20:25:00.814176  936132 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 20:25:00.822138  936132 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 20:25:00.833278  936132 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 20:25:00.837533  936132 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 20:25:00.842082  936132 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 20:25:00.847401  936132 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 20:25:00.862827  936132 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 20:25:01.142620  936132 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 20:25:01.251327  936132 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 20:25:01.251348  936132 kubeadm.go:322] 
	I0717 20:25:01.251409  936132 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 20:25:01.251425  936132 kubeadm.go:322] 
	I0717 20:25:01.251505  936132 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 20:25:01.251553  936132 kubeadm.go:322] 
	I0717 20:25:01.251577  936132 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 20:25:01.251638  936132 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 20:25:01.251693  936132 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 20:25:01.251702  936132 kubeadm.go:322] 
	I0717 20:25:01.251751  936132 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 20:25:01.251825  936132 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 20:25:01.251893  936132 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 20:25:01.251901  936132 kubeadm.go:322] 
	I0717 20:25:01.251988  936132 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 20:25:01.252064  936132 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 20:25:01.252071  936132 kubeadm.go:322] 
	I0717 20:25:01.252163  936132 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nitptj.lpmec0e46e4rpo6y \
	I0717 20:25:01.252268  936132 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c9abf7e3fe1433b2ee101b0228035f8f2ed73026247b5c044e3b341ae3a2cc41 \
	I0717 20:25:01.252293  936132 kubeadm.go:322]     --control-plane 
	I0717 20:25:01.252302  936132 kubeadm.go:322] 
	I0717 20:25:01.252381  936132 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 20:25:01.252390  936132 kubeadm.go:322] 
	I0717 20:25:01.252472  936132 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nitptj.lpmec0e46e4rpo6y \
	I0717 20:25:01.252574  936132 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c9abf7e3fe1433b2ee101b0228035f8f2ed73026247b5c044e3b341ae3a2cc41 
	I0717 20:25:01.256704  936132 kubeadm.go:322] W0717 20:24:40.945343    1100 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 20:25:01.256948  936132 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-aws\n", err: exit status 1
	I0717 20:25:01.257060  936132 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 20:25:01.257180  936132 kubeadm.go:322] W0717 20:24:48.722839    1100 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 20:25:01.257301  936132 kubeadm.go:322] W0717 20:24:48.724107    1100 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 20:25:01.257319  936132 cni.go:84] Creating CNI manager for ""
	I0717 20:25:01.257333  936132 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:25:01.259729  936132 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 20:25:01.262037  936132 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 20:25:01.268243  936132 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0717 20:25:01.268264  936132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 20:25:01.293337  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 20:25:01.768812  936132 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 20:25:01.769050  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:01.769126  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=ingress-addon-legacy-786531 minikube.k8s.io/updated_at=2023_07_17T20_25_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:01.783013  936132 ops.go:34] apiserver oom_adj: -16
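The `oom_adj: -16` line comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above: a negative score tells the kernel OOM killer to prefer other processes over the apiserver. A sketch of the same check (with a fallback to the current shell so it runs without an apiserver, and to `oom_score_adj`, the non-deprecated interface on modern kernels):

```shell
# Read the OOM-killer score for kube-apiserver (or, for demo purposes,
# this shell if no apiserver is running). Negative values mean the
# kernel avoids killing the process under memory pressure.
pid=$(pgrep -o kube-apiserver || echo $$)
cat "/proc/$pid/oom_adj" 2>/dev/null || cat "/proc/$pid/oom_score_adj"
```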
	I0717 20:25:01.942597  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:02.561633  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:03.061857  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:03.562106  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:04.061351  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:04.561385  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:05.062248  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:05.561356  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:06.062158  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:06.562057  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:07.061589  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:07.561385  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:08.061737  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:08.561996  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:09.062044  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:09.562283  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:10.062208  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:10.562346  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:11.062245  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:11.561901  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:12.061854  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:12.562102  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:13.061597  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:13.562282  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:14.061341  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:14.561973  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:15.061939  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:15.561365  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:16.061652  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:16.561328  936132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 20:25:16.748129  936132 kubeadm.go:1081] duration metric: took 14.979161432s to wait for elevateKubeSystemPrivileges.
	I0717 20:25:16.748160  936132 kubeadm.go:406] StartCluster complete in 35.93683762s
	I0717 20:25:16.748176  936132 settings.go:142] acquiring lock: {Name:mk07e0d8498fadd24504785e1ba3db0cfccaf251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:25:16.748240  936132 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:25:16.749034  936132 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/kubeconfig: {Name:mk933d9b210c77bbf248211a6ac799f4302f2fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:25:16.749728  936132 kapi.go:59] client config for ingress-addon-legacy-786531: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.key", CAFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 20:25:16.751135  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 20:25:16.751395  936132 config.go:182] Loaded profile config "ingress-addon-legacy-786531": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0717 20:25:16.751430  936132 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 20:25:16.751492  936132 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-786531"
	I0717 20:25:16.751504  936132 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-786531"
	I0717 20:25:16.751537  936132 host.go:66] Checking if "ingress-addon-legacy-786531" exists ...
	I0717 20:25:16.751969  936132 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-786531 --format={{.State.Status}}
	I0717 20:25:16.752600  936132 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 20:25:16.752635  936132 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-786531"
	I0717 20:25:16.752648  936132 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-786531"
	I0717 20:25:16.753012  936132 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-786531 --format={{.State.Status}}
	I0717 20:25:16.794397  936132 kapi.go:59] client config for ingress-addon-legacy-786531: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.key", CAFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 20:25:16.802755  936132 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-786531"
	I0717 20:25:16.802810  936132 host.go:66] Checking if "ingress-addon-legacy-786531" exists ...
	I0717 20:25:16.803254  936132 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-786531 --format={{.State.Status}}
	I0717 20:25:16.824886  936132 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 20:25:16.827379  936132 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:25:16.827402  936132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 20:25:16.827472  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:25:16.845030  936132 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 20:25:16.845050  936132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 20:25:16.845123  936132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-786531
	I0717 20:25:16.883546  936132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa Username:docker}
	I0717 20:25:16.898362  936132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/ingress-addon-legacy-786531/id_rsa Username:docker}
	I0717 20:25:17.087212  936132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 20:25:17.150368  936132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 20:25:17.173588  936132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 20:25:17.336760  936132 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-786531" context rescaled to 1 replicas
	I0717 20:25:17.336893  936132 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0717 20:25:17.339548  936132 out.go:177] * Verifying Kubernetes components...
	I0717 20:25:17.341905  936132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:25:17.698270  936132 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 20:25:17.869066  936132 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 20:25:17.867400  936132 kapi.go:59] client config for ingress-addon-legacy-786531: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.key", CAFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 20:25:17.872120  936132 addons.go:502] enable addons completed in 1.120677091s: enabled=[default-storageclass storage-provisioner]
	I0717 20:25:17.869554  936132 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-786531" to be "Ready" ...
	I0717 20:25:17.894823  936132 node_ready.go:49] node "ingress-addon-legacy-786531" has status "Ready":"True"
	I0717 20:25:17.894848  936132 node_ready.go:38] duration metric: took 22.681972ms waiting for node "ingress-addon-legacy-786531" to be "Ready" ...
	I0717 20:25:17.894858  936132 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:25:17.908160  936132 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:19.921932  936132 pod_ready.go:102] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"False"
	I0717 20:25:22.421166  936132 pod_ready.go:102] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"False"
	I0717 20:25:24.921594  936132 pod_ready.go:102] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"False"
	I0717 20:25:27.420514  936132 pod_ready.go:102] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"False"
	I0717 20:25:29.920459  936132 pod_ready.go:102] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"False"
	I0717 20:25:32.420912  936132 pod_ready.go:102] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"False"
	I0717 20:25:34.420977  936132 pod_ready.go:102] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"False"
	I0717 20:25:36.421381  936132 pod_ready.go:102] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"False"
	I0717 20:25:36.921277  936132 pod_ready.go:92] pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace has status "Ready":"True"
	I0717 20:25:36.921305  936132 pod_ready.go:81] duration metric: took 19.013066323s waiting for pod "coredns-66bff467f8-xhw9x" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.921320  936132 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-786531" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.926273  936132 pod_ready.go:92] pod "etcd-ingress-addon-legacy-786531" in "kube-system" namespace has status "Ready":"True"
	I0717 20:25:36.926299  936132 pod_ready.go:81] duration metric: took 4.97064ms waiting for pod "etcd-ingress-addon-legacy-786531" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.926317  936132 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-786531" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.931133  936132 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-786531" in "kube-system" namespace has status "Ready":"True"
	I0717 20:25:36.931159  936132 pod_ready.go:81] duration metric: took 4.832655ms waiting for pod "kube-apiserver-ingress-addon-legacy-786531" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.931170  936132 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-786531" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.936127  936132 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-786531" in "kube-system" namespace has status "Ready":"True"
	I0717 20:25:36.936151  936132 pod_ready.go:81] duration metric: took 4.973233ms waiting for pod "kube-controller-manager-ingress-addon-legacy-786531" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.936163  936132 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n9sl2" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.941155  936132 pod_ready.go:92] pod "kube-proxy-n9sl2" in "kube-system" namespace has status "Ready":"True"
	I0717 20:25:36.941179  936132 pod_ready.go:81] duration metric: took 5.009114ms waiting for pod "kube-proxy-n9sl2" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:36.941190  936132 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-786531" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:37.116537  936132 request.go:628] Waited for 175.263874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-786531
	I0717 20:25:37.316424  936132 request.go:628] Waited for 197.132052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-786531
	I0717 20:25:37.319269  936132 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-786531" in "kube-system" namespace has status "Ready":"True"
	I0717 20:25:37.319297  936132 pod_ready.go:81] duration metric: took 378.099616ms waiting for pod "kube-scheduler-ingress-addon-legacy-786531" in "kube-system" namespace to be "Ready" ...
	I0717 20:25:37.319309  936132 pod_ready.go:38] duration metric: took 19.42437175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 20:25:37.319329  936132 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:25:37.319390  936132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:25:37.333035  936132 api_server.go:72] duration metric: took 19.99608234s to wait for apiserver process to appear ...
	I0717 20:25:37.333061  936132 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:25:37.333077  936132 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 20:25:37.342150  936132 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 20:25:37.343229  936132 api_server.go:141] control plane version: v1.18.20
	I0717 20:25:37.343252  936132 api_server.go:131] duration metric: took 10.184761ms to wait for apiserver health ...
	I0717 20:25:37.343261  936132 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 20:25:37.516657  936132 request.go:628] Waited for 173.296607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 20:25:37.522744  936132 system_pods.go:59] 8 kube-system pods found
	I0717 20:25:37.522783  936132 system_pods.go:61] "coredns-66bff467f8-xhw9x" [95bdde88-3be3-4fc2-aa32-36223d6f6309] Running
	I0717 20:25:37.522790  936132 system_pods.go:61] "etcd-ingress-addon-legacy-786531" [8b751dc4-6eb8-4b14-bf75-94376840d797] Running
	I0717 20:25:37.522827  936132 system_pods.go:61] "kindnet-ck56c" [e745e93f-70db-4139-a120-6823585aeb74] Running
	I0717 20:25:37.522839  936132 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-786531" [d896a629-31aa-478b-8309-45d1cf9003e9] Running
	I0717 20:25:37.522844  936132 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-786531" [20b7594f-d3ab-49a1-a86a-d44b34ae3da5] Running
	I0717 20:25:37.522855  936132 system_pods.go:61] "kube-proxy-n9sl2" [ecffb273-c7a2-4faa-a973-e2ff93018c62] Running
	I0717 20:25:37.522861  936132 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-786531" [932aaff0-5bd7-42e9-9e3d-0b52129a98bf] Running
	I0717 20:25:37.522867  936132 system_pods.go:61] "storage-provisioner" [158a5fce-775a-42f0-9267-208955d20e79] Running
	I0717 20:25:37.522874  936132 system_pods.go:74] duration metric: took 179.608158ms to wait for pod list to return data ...
	I0717 20:25:37.522907  936132 default_sa.go:34] waiting for default service account to be created ...
	I0717 20:25:37.716289  936132 request.go:628] Waited for 193.304701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 20:25:37.718872  936132 default_sa.go:45] found service account: "default"
	I0717 20:25:37.718903  936132 default_sa.go:55] duration metric: took 195.987016ms for default service account to be created ...
	I0717 20:25:37.718917  936132 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 20:25:37.916308  936132 request.go:628] Waited for 197.31067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 20:25:37.922408  936132 system_pods.go:86] 8 kube-system pods found
	I0717 20:25:37.922481  936132 system_pods.go:89] "coredns-66bff467f8-xhw9x" [95bdde88-3be3-4fc2-aa32-36223d6f6309] Running
	I0717 20:25:37.922500  936132 system_pods.go:89] "etcd-ingress-addon-legacy-786531" [8b751dc4-6eb8-4b14-bf75-94376840d797] Running
	I0717 20:25:37.922507  936132 system_pods.go:89] "kindnet-ck56c" [e745e93f-70db-4139-a120-6823585aeb74] Running
	I0717 20:25:37.922512  936132 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-786531" [d896a629-31aa-478b-8309-45d1cf9003e9] Running
	I0717 20:25:37.922518  936132 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-786531" [20b7594f-d3ab-49a1-a86a-d44b34ae3da5] Running
	I0717 20:25:37.922523  936132 system_pods.go:89] "kube-proxy-n9sl2" [ecffb273-c7a2-4faa-a973-e2ff93018c62] Running
	I0717 20:25:37.922528  936132 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-786531" [932aaff0-5bd7-42e9-9e3d-0b52129a98bf] Running
	I0717 20:25:37.922533  936132 system_pods.go:89] "storage-provisioner" [158a5fce-775a-42f0-9267-208955d20e79] Running
	I0717 20:25:37.922540  936132 system_pods.go:126] duration metric: took 203.61757ms to wait for k8s-apps to be running ...
	I0717 20:25:37.922547  936132 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 20:25:37.922604  936132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:25:37.936104  936132 system_svc.go:56] duration metric: took 13.538159ms WaitForService to wait for kubelet.
	I0717 20:25:37.936134  936132 kubeadm.go:581] duration metric: took 20.599186726s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 20:25:37.936154  936132 node_conditions.go:102] verifying NodePressure condition ...
	I0717 20:25:38.116540  936132 request.go:628] Waited for 180.31829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0717 20:25:38.119377  936132 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0717 20:25:38.119407  936132 node_conditions.go:123] node cpu capacity is 2
	I0717 20:25:38.119419  936132 node_conditions.go:105] duration metric: took 183.259386ms to run NodePressure ...
	I0717 20:25:38.119450  936132 start.go:228] waiting for startup goroutines ...
	I0717 20:25:38.119462  936132 start.go:233] waiting for cluster config update ...
	I0717 20:25:38.119472  936132 start.go:242] writing updated cluster config ...
	I0717 20:25:38.119778  936132 ssh_runner.go:195] Run: rm -f paused
	I0717 20:25:38.180227  936132 start.go:578] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0717 20:25:38.182981  936132 out.go:177] 
	W0717 20:25:38.185235  936132 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0717 20:25:38.187200  936132 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0717 20:25:38.189091  936132 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-786531" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	db94681ede8b9       13753a81eccfd       13 seconds ago       Exited              hello-world-app           2                   430d0fd351e50       hello-world-app-5f5d8b66bb-wbz68
	8720c92301470       66bf2c914bf4d       37 seconds ago       Running             nginx                     0                   84452a3f249a3       nginx
	f07256a29e6bd       d7f0cba3aa5bf       55 seconds ago       Exited              controller                0                   fdc04d1c1f464       ingress-nginx-controller-7fcf777cb7-5h6ph
	a241a473c628d       a883f7fc35610       About a minute ago   Exited              patch                     0                   2655bcf090327       ingress-nginx-admission-patch-hgjn7
	940a90574792c       a883f7fc35610       About a minute ago   Exited              create                    0                   1c64a1875da84       ingress-nginx-admission-create-4zc4n
	1363a27058838       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   8768e3c9fef6c       coredns-66bff467f8-xhw9x
	13cb1981feedf       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   5c15ad85d155d       storage-provisioner
	cd3e294d6ca90       b18bf71b941ba       About a minute ago   Running             kindnet-cni               0                   fc6fd768cf078       kindnet-ck56c
	5271b6481f734       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   d4bb42aa64121       kube-proxy-n9sl2
	4ed0cb27c5c57       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   d1846ec9d1e1a       etcd-ingress-addon-legacy-786531
	cb29d6420ff85       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   7793c277251c7       kube-controller-manager-ingress-addon-legacy-786531
	2e6ddd007817e       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   1ccd3b69b5eb6       kube-apiserver-ingress-addon-legacy-786531
	6a9e61a4473bc       095f37015706d       About a minute ago   Running             kube-scheduler            0                   28cbc2bf956cf       kube-scheduler-ingress-addon-legacy-786531
	
	* 
	* ==> containerd <==
	* Jul 17 20:26:30 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:30.860582982Z" level=info msg="RemoveContainer for \"c90f41f2449f3e66f64485b411defc5c0ee08f1ef0e68df9e72e063d60196861\" returns successfully"
	Jul 17 20:26:35 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:35.514730964Z" level=info msg="StopContainer for \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\" with timeout 2 (s)"
	Jul 17 20:26:35 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:35.514794562Z" level=info msg="StopContainer for \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\" with timeout 2 (s)"
	Jul 17 20:26:35 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:35.516189619Z" level=info msg="Skipping the sending of signal terminated to container \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\" because a prior stop with timeout>0 request already sent the signal"
	Jul 17 20:26:35 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:35.516265024Z" level=info msg="Stop container \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\" with signal terminated"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.516904221Z" level=info msg="Kill container \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\""
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.536173928Z" level=info msg="Kill container \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\""
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.602841941Z" level=info msg="shim disconnected" id=f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.602900706Z" level=warning msg="cleaning up after shim disconnected" id=f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37 namespace=k8s.io
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.602911045Z" level=info msg="cleaning up dead shim"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.615352184Z" level=warning msg="cleanup warnings time=\"2023-07-17T20:26:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4554 runtime=io.containerd.runc.v2\n"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.618234278Z" level=info msg="StopContainer for \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\" returns successfully"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.618232907Z" level=info msg="StopContainer for \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\" returns successfully"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.618835773Z" level=info msg="StopPodSandbox for \"fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381\""
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.619000040Z" level=info msg="Container to stop \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.618936811Z" level=info msg="StopPodSandbox for \"fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381\""
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.619273272Z" level=info msg="Container to stop \"f07256a29e6bd27363be927f88efa2aeabf10fce4f8c8ee962a5283781761c37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.672643699Z" level=info msg="shim disconnected" id=fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.673004045Z" level=warning msg="cleaning up after shim disconnected" id=fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381 namespace=k8s.io
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.673100636Z" level=info msg="cleaning up dead shim"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.684276877Z" level=warning msg="cleanup warnings time=\"2023-07-17T20:26:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4593 runtime=io.containerd.runc.v2\n"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.736052846Z" level=info msg="TearDown network for sandbox \"fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381\" successfully"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.736230176Z" level=info msg="StopPodSandbox for \"fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381\" returns successfully"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.739008449Z" level=info msg="TearDown network for sandbox \"fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381\" successfully"
	Jul 17 20:26:37 ingress-addon-legacy-786531 containerd[820]: time="2023-07-17T20:26:37.739058402Z" level=info msg="StopPodSandbox for \"fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381\" returns successfully"
	
	* 
	* ==> coredns [1363a270588381dc8be6f0ac565f36a3b56a2fc6aab1f2984efa752804f7b5a7] <==
	* [INFO] 10.244.0.5:59933 - 34638 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038531s
	[INFO] 10.244.0.5:56734 - 29926 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00198797s
	[INFO] 10.244.0.5:59933 - 21908 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001276057s
	[INFO] 10.244.0.5:59933 - 55492 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00106594s
	[INFO] 10.244.0.5:59933 - 4812 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000106093s
	[INFO] 10.244.0.5:56734 - 19851 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002847164s
	[INFO] 10.244.0.5:56734 - 7291 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005623s
	[INFO] 10.244.0.5:36692 - 46301 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089174s
	[INFO] 10.244.0.5:40088 - 42563 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061194s
	[INFO] 10.244.0.5:36692 - 16948 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060759s
	[INFO] 10.244.0.5:36692 - 20612 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058577s
	[INFO] 10.244.0.5:40088 - 14495 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000095081s
	[INFO] 10.244.0.5:40088 - 35896 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043101s
	[INFO] 10.244.0.5:36692 - 20282 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042026s
	[INFO] 10.244.0.5:40088 - 18649 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058494s
	[INFO] 10.244.0.5:40088 - 575 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043298s
	[INFO] 10.244.0.5:40088 - 28001 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043496s
	[INFO] 10.244.0.5:36692 - 2239 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005445s
	[INFO] 10.244.0.5:36692 - 62392 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055221s
	[INFO] 10.244.0.5:40088 - 14711 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002121467s
	[INFO] 10.244.0.5:36692 - 35663 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001470717s
	[INFO] 10.244.0.5:40088 - 20727 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006151634s
	[INFO] 10.244.0.5:40088 - 53980 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046999s
	[INFO] 10.244.0.5:36692 - 29293 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001514015s
	[INFO] 10.244.0.5:36692 - 852 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053793s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-786531
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-786531
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=ingress-addon-legacy-786531
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T20_25_01_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 20:24:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-786531
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:26:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:26:34 +0000   Mon, 17 Jul 2023 20:24:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:26:34 +0000   Mon, 17 Jul 2023 20:24:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:26:34 +0000   Mon, 17 Jul 2023 20:24:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 20:26:34 +0000   Mon, 17 Jul 2023 20:25:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-786531
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022628Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd31ad6e490940abbf60537747f216bb
	  System UUID:                b337188a-1265-47c0-9766-87054966304e
	  Boot ID:                    cbdc664b-32f3-4468-95d3-fdbd4fe2a3f0
	  Kernel Version:             5.15.0-1039-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.21
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-wbz68                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 coredns-66bff467f8-xhw9x                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     87s
	  kube-system                 etcd-ingress-addon-legacy-786531                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-ck56c                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-786531             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-786531    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-n9sl2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-ingress-addon-legacy-786531             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  113s (x4 over 113s)  kubelet     Node ingress-addon-legacy-786531 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x5 over 113s)  kubelet     Node ingress-addon-legacy-786531 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 113s)  kubelet     Node ingress-addon-legacy-786531 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-786531 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-786531 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-786531 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-786531 status is now: NodeReady
	  Normal  Starting                 86s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.001106] FS-Cache: O-key=[8] '4c72ed0000000000'
	[  +0.000795] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000f945483a
	[  +0.001136] FS-Cache: N-key=[8] '4c72ed0000000000'
	[  +0.003226] FS-Cache: Duplicate cookie detected
	[  +0.000785] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001034] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000fde5d07c
	[  +0.001069] FS-Cache: O-key=[8] '4c72ed0000000000'
	[  +0.000738] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=0000000090c83777
	[  +0.001137] FS-Cache: N-key=[8] '4c72ed0000000000'
	[  +2.706152] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000960] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000c5602f0f
	[  +0.001080] FS-Cache: O-key=[8] '4b72ed0000000000'
	[  +0.000733] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000f945483a
	[  +0.001049] FS-Cache: N-key=[8] '4b72ed0000000000'
	[  +0.350862] FS-Cache: Duplicate cookie detected
	[  +0.000762] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001000] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=000000000634e0be
	[  +0.001088] FS-Cache: O-key=[8] '5172ed0000000000'
	[  +0.000720] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000a176162d
	[  +0.001049] FS-Cache: N-key=[8] '5172ed0000000000'
	
	* 
	* ==> etcd [4ed0cb27c5c57d0ff0a9354c35f1ba2f5d64e62906ea87704b55d1bac6cc42a1] <==
	* raft2023/07/17 20:24:52 INFO: aec36adc501070cc became follower at term 0
	raft2023/07/17 20:24:52 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/17 20:24:52 INFO: aec36adc501070cc became follower at term 1
	raft2023/07/17 20:24:52 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 20:24:53.050331 W | auth: simple token is not cryptographically signed
	2023-07-17 20:24:53.125477 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-17 20:24:53.191030 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 20:24:53.421063 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-17 20:24:53.470422 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-17 20:24:53.470914 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/07/17 20:24:53 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 20:24:53.471210 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/07/17 20:24:53 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/17 20:24:53 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/17 20:24:53 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/17 20:24:53 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/17 20:24:53 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-17 20:24:53.706160 I | etcdserver: published {Name:ingress-addon-legacy-786531 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-17 20:24:53.706458 I | embed: ready to serve client requests
	2023-07-17 20:24:53.708133 I | embed: serving client requests on 192.168.49.2:2379
	2023-07-17 20:24:53.708330 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-17 20:24:53.708797 I | embed: ready to serve client requests
	2023-07-17 20:24:53.710341 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 20:24:53.724000 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-17 20:24:53.724309 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  20:26:43 up  4:09,  0 users,  load average: 0.99, 1.66, 1.97
	Linux ingress-addon-legacy-786531 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [cd3e294d6ca90e31b9b80ba91cf52bba9b25547d0cb7820a03c30a9e0fbed741] <==
	* I0717 20:25:19.036680       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 20:25:19.036747       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0717 20:25:19.037084       1 main.go:116] setting mtu 1500 for CNI 
	I0717 20:25:19.037149       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 20:25:19.037192       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 20:25:19.433883       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:25:19.433922       1 main.go:227] handling current node
	I0717 20:25:29.443844       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:25:29.443875       1 main.go:227] handling current node
	I0717 20:25:39.456258       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:25:39.456286       1 main.go:227] handling current node
	I0717 20:25:49.459835       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:25:49.459864       1 main.go:227] handling current node
	I0717 20:25:59.471826       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:25:59.471856       1 main.go:227] handling current node
	I0717 20:26:09.478971       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:26:09.479002       1 main.go:227] handling current node
	I0717 20:26:19.482204       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:26:19.482235       1 main.go:227] handling current node
	I0717 20:26:29.485690       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:26:29.485723       1 main.go:227] handling current node
	I0717 20:26:39.496083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 20:26:39.496112       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [2e6ddd007817e20d60910052d38bf75e5436849c7e8b961823bea4cbcee5ce15] <==
	* I0717 20:24:57.813486       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0717 20:24:57.890845       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0717 20:24:58.007533       1 cache.go:39] Caches are synced for autoregister controller
	I0717 20:24:58.012780       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 20:24:58.012965       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0717 20:24:58.013060       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 20:24:58.018971       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0717 20:24:58.806393       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0717 20:24:58.806439       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 20:24:58.817095       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0717 20:24:58.820733       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0717 20:24:58.820757       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0717 20:24:59.319023       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 20:24:59.362628       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0717 20:24:59.481168       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0717 20:24:59.482180       1 controller.go:609] quota admission added evaluator for: endpoints
	I0717 20:24:59.486214       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 20:25:00.275695       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0717 20:25:01.123522       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0717 20:25:01.223663       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0717 20:25:04.557500       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 20:25:16.294939       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0717 20:25:16.561575       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0717 20:25:39.119538       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0717 20:26:04.119883       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [cb29d6420ff852c168a29d20e779c853528f1fbdf5dee76b17f7431da989af5f] <==
	* I0717 20:25:16.602642       1 shared_informer.go:230] Caches are synced for attach detach 
	I0717 20:25:16.604762       1 shared_informer.go:230] Caches are synced for node 
	I0717 20:25:16.604796       1 range_allocator.go:172] Starting range CIDR allocator
	I0717 20:25:16.604808       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I0717 20:25:16.604812       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I0717 20:25:16.627674       1 range_allocator.go:373] Set node ingress-addon-legacy-786531 PodCIDR to [10.244.0.0/24]
	I0717 20:25:16.628113       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"3438d1a3-28af-4b08-84f4-5023fc71c671", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-n9sl2
	E0717 20:25:16.628466       1 range_allocator.go:361] Node ingress-addon-legacy-786531 already has a CIDR allocated [10.244.0.0/24]. Releasing the new one.
	I0717 20:25:16.698546       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I0717 20:25:16.738648       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I0717 20:25:16.813115       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 20:25:16.828671       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 20:25:16.842156       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"5368dd1c-b359-4c40-a5bb-75caeeba6fa7", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0717 20:25:16.865826       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 20:25:16.865849       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 20:25:16.904135       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 20:25:16.920019       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"03b6713f-d43f-422a-a982-9a7a0e748974", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-tsbbq
	I0717 20:25:39.105260       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9c36ddea-fb41-400c-9b11-202d9c9597fb", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0717 20:25:39.137834       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"92131d82-d549-4653-b92b-646c5607618a", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-5h6ph
	I0717 20:25:39.149947       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6279ed8e-79cd-4e86-b59b-8438ea843a9c", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-4zc4n
	I0717 20:25:39.220684       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ccc0e436-8997-4613-bea3-dc13598daf0f", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-hgjn7
	I0717 20:25:41.718583       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6279ed8e-79cd-4e86-b59b-8438ea843a9c", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 20:25:41.741845       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"ccc0e436-8997-4613-bea3-dc13598daf0f", APIVersion:"batch/v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 20:26:12.872708       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"7ba3aabb-75b5-400f-8988-5d25a24b7624", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0717 20:26:12.880984       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"125130f2-7a45-4afa-9b4e-2d6e24ad586b", APIVersion:"apps/v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-wbz68
	
	* 
	* ==> kube-proxy [5271b6481f73488e4156f703923c4f0abbed6457b3d522224e6e9a441156bc9a] <==
	* W0717 20:25:17.509118       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0717 20:25:17.525658       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0717 20:25:17.525705       1 server_others.go:186] Using iptables Proxier.
	I0717 20:25:17.526004       1 server.go:583] Version: v1.18.20
	I0717 20:25:17.530322       1 config.go:315] Starting service config controller
	I0717 20:25:17.533294       1 config.go:133] Starting endpoints config controller
	I0717 20:25:17.533312       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0717 20:25:17.533454       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0717 20:25:17.633483       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0717 20:25:17.633647       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [6a9e61a4473bcdf3d3a9eae0dbf9e7e4c1adcfed334fa7b6e77f5d3416c3e179] <==
	* I0717 20:24:58.038592       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 20:24:58.041081       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 20:24:58.041254       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 20:24:58.042758       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0717 20:24:58.042947       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 20:24:58.053274       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 20:24:58.053896       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 20:24:58.054170       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 20:24:58.054400       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 20:24:58.054623       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 20:24:58.054842       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 20:24:58.055111       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:24:58.055311       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:24:58.055543       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:24:58.055775       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:24:58.056083       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:24:58.056309       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 20:24:58.894737       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 20:24:58.930662       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 20:24:58.935445       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 20:24:58.942158       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 20:24:59.052031       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 20:24:59.257115       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 20:25:01.541691       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0717 20:25:16.390115       1 factory.go:503] pod: kube-system/coredns-66bff467f8-xhw9x is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Jul 17 20:26:16 ingress-addon-legacy-786531 kubelet[1628]: E0717 20:26:16.816984    1628 pod_workers.go:191] Error syncing pod 0243d16e-4286-4d19-af16-acdf837acf82 ("hello-world-app-5f5d8b66bb-wbz68_default(0243d16e-4286-4d19-af16-acdf837acf82)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-wbz68_default(0243d16e-4286-4d19-af16-acdf837acf82)"
	Jul 17 20:26:17 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:17.819722    1628 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30a5e4a9300b8a2e03d9a3cabae3adde68f89095324c10b27c2d42a0f994336d
	Jul 17 20:26:17 ingress-addon-legacy-786531 kubelet[1628]: E0717 20:26:17.819981    1628 pod_workers.go:191] Error syncing pod 0243d16e-4286-4d19-af16-acdf837acf82 ("hello-world-app-5f5d8b66bb-wbz68_default(0243d16e-4286-4d19-af16-acdf837acf82)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-wbz68_default(0243d16e-4286-4d19-af16-acdf837acf82)"
	Jul 17 20:26:25 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:25.578186    1628 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c90f41f2449f3e66f64485b411defc5c0ee08f1ef0e68df9e72e063d60196861
	Jul 17 20:26:25 ingress-addon-legacy-786531 kubelet[1628]: E0717 20:26:25.579016    1628 pod_workers.go:191] Error syncing pod 355deb78-e148-4751-b577-8f20a3daa916 ("kube-ingress-dns-minikube_kube-system(355deb78-e148-4751-b577-8f20a3daa916)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(355deb78-e148-4751-b577-8f20a3daa916)"
	Jul 17 20:26:28 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:28.860588    1628 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-gn2ss" (UniqueName: "kubernetes.io/secret/355deb78-e148-4751-b577-8f20a3daa916-minikube-ingress-dns-token-gn2ss") pod "355deb78-e148-4751-b577-8f20a3daa916" (UID: "355deb78-e148-4751-b577-8f20a3daa916")
	Jul 17 20:26:28 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:28.864973    1628 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355deb78-e148-4751-b577-8f20a3daa916-minikube-ingress-dns-token-gn2ss" (OuterVolumeSpecName: "minikube-ingress-dns-token-gn2ss") pod "355deb78-e148-4751-b577-8f20a3daa916" (UID: "355deb78-e148-4751-b577-8f20a3daa916"). InnerVolumeSpecName "minikube-ingress-dns-token-gn2ss". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 20:26:28 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:28.960958    1628 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-gn2ss" (UniqueName: "kubernetes.io/secret/355deb78-e148-4751-b577-8f20a3daa916-minikube-ingress-dns-token-gn2ss") on node "ingress-addon-legacy-786531" DevicePath ""
	Jul 17 20:26:29 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:29.577921    1628 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30a5e4a9300b8a2e03d9a3cabae3adde68f89095324c10b27c2d42a0f994336d
	Jul 17 20:26:29 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:29.842618    1628 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 30a5e4a9300b8a2e03d9a3cabae3adde68f89095324c10b27c2d42a0f994336d
	Jul 17 20:26:29 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:29.843017    1628 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: db94681ede8b995371fbac2ba4963398e35b30db1de2adcec981feb9d98635fb
	Jul 17 20:26:29 ingress-addon-legacy-786531 kubelet[1628]: E0717 20:26:29.843257    1628 pod_workers.go:191] Error syncing pod 0243d16e-4286-4d19-af16-acdf837acf82 ("hello-world-app-5f5d8b66bb-wbz68_default(0243d16e-4286-4d19-af16-acdf837acf82)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-wbz68_default(0243d16e-4286-4d19-af16-acdf837acf82)"
	Jul 17 20:26:30 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:30.848454    1628 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c90f41f2449f3e66f64485b411defc5c0ee08f1ef0e68df9e72e063d60196861
	Jul 17 20:26:35 ingress-addon-legacy-786531 kubelet[1628]: E0717 20:26:35.515393    1628 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5h6ph.1772c186efed2158", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5h6ph", UID:"7bc7130c-b364-4357-acb9-b2e8be8f80b0", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-786531"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12586dede8b1358, ext:94463874938, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12586dede8b1358, ext:94463874938, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5h6ph.1772c186efed2158" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 20:26:35 ingress-addon-legacy-786531 kubelet[1628]: E0717 20:26:35.535134    1628 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-5h6ph.1772c186efed2158", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-5h6ph", UID:"7bc7130c-b364-4357-acb9-b2e8be8f80b0", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-786531"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12586dede8b1358, ext:94463874938, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12586dede8ad3b1, ext:94463858643, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-5h6ph.1772c186efed2158" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 20:26:37 ingress-addon-legacy-786531 kubelet[1628]: W0717 20:26:37.864184    1628 pod_container_deletor.go:77] Container "fdc04d1c1f46456444dfa11ae6d6e454fbaa9e47d434dc81a988c5bc42ee1381" not found in pod's containers
	Jul 17 20:26:39 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:39.693029    1628 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-rbdm8" (UniqueName: "kubernetes.io/secret/7bc7130c-b364-4357-acb9-b2e8be8f80b0-ingress-nginx-token-rbdm8") pod "7bc7130c-b364-4357-acb9-b2e8be8f80b0" (UID: "7bc7130c-b364-4357-acb9-b2e8be8f80b0")
	Jul 17 20:26:39 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:39.693099    1628 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7bc7130c-b364-4357-acb9-b2e8be8f80b0-webhook-cert") pod "7bc7130c-b364-4357-acb9-b2e8be8f80b0" (UID: "7bc7130c-b364-4357-acb9-b2e8be8f80b0")
	Jul 17 20:26:39 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:39.699478    1628 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc7130c-b364-4357-acb9-b2e8be8f80b0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7bc7130c-b364-4357-acb9-b2e8be8f80b0" (UID: "7bc7130c-b364-4357-acb9-b2e8be8f80b0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 20:26:39 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:39.700027    1628 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc7130c-b364-4357-acb9-b2e8be8f80b0-ingress-nginx-token-rbdm8" (OuterVolumeSpecName: "ingress-nginx-token-rbdm8") pod "7bc7130c-b364-4357-acb9-b2e8be8f80b0" (UID: "7bc7130c-b364-4357-acb9-b2e8be8f80b0"). InnerVolumeSpecName "ingress-nginx-token-rbdm8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 20:26:39 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:39.793485    1628 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7bc7130c-b364-4357-acb9-b2e8be8f80b0-webhook-cert") on node "ingress-addon-legacy-786531" DevicePath ""
	Jul 17 20:26:39 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:39.793541    1628 reconciler.go:319] Volume detached for volume "ingress-nginx-token-rbdm8" (UniqueName: "kubernetes.io/secret/7bc7130c-b364-4357-acb9-b2e8be8f80b0-ingress-nginx-token-rbdm8") on node "ingress-addon-legacy-786531" DevicePath ""
	Jul 17 20:26:40 ingress-addon-legacy-786531 kubelet[1628]: I0717 20:26:40.578087    1628 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: db94681ede8b995371fbac2ba4963398e35b30db1de2adcec981feb9d98635fb
	Jul 17 20:26:40 ingress-addon-legacy-786531 kubelet[1628]: E0717 20:26:40.578602    1628 pod_workers.go:191] Error syncing pod 0243d16e-4286-4d19-af16-acdf837acf82 ("hello-world-app-5f5d8b66bb-wbz68_default(0243d16e-4286-4d19-af16-acdf837acf82)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-wbz68_default(0243d16e-4286-4d19-af16-acdf837acf82)"
	Jul 17 20:26:40 ingress-addon-legacy-786531 kubelet[1628]: W0717 20:26:40.586814    1628 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/7bc7130c-b364-4357-acb9-b2e8be8f80b0/volumes" does not exist
	
	* 
	* ==> storage-provisioner [13cb1981feedffc9b8620b37f748a537ecfedb19c522760492648b7fd68df50d] <==
	* I0717 20:25:20.224821       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 20:25:20.237376       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 20:25:20.237528       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 20:25:20.249972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 20:25:20.250245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-786531_dc8934dd-eb4e-400f-806b-298406163034!
	I0717 20:25:20.251594       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ccb8a7ca-1f7e-4f77-a60b-9f41110f0972", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-786531_dc8934dd-eb4e-400f-806b-298406163034 became leader
	I0717 20:25:20.350873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-786531_dc8934dd-eb4e-400f-806b-298406163034!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-786531 -n ingress-addon-legacy-786531
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-786531 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (55.91s)

                                                
                                    
TestMissingContainerUpgrade (244.83s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.22.0.3562752340.exe start -p missing-upgrade-200025 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.22.0.3562752340.exe start -p missing-upgrade-200025 --memory=2200 --driver=docker  --container-runtime=containerd: (2m14.927613162s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-200025
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-200025: (10.382480356s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-200025
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-200025 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-200025 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 90 (1m33.988771725s)

                                                
                                                
-- stdout --
	* [missing-upgrade-200025] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-200025 in cluster missing-upgrade-200025
	* Pulling base image ...
	* Downloading Kubernetes v1.21.2 preload ...
	* docker "missing-upgrade-200025" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 20:47:34.918293 1019794 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:47:34.918566 1019794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:47:34.918595 1019794 out.go:309] Setting ErrFile to fd 2...
	I0717 20:47:34.918614 1019794 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:47:34.918940 1019794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:47:34.919390 1019794 out.go:303] Setting JSON to false
	I0717 20:47:34.920509 1019794 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16202,"bootTime":1689610653,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:47:34.920616 1019794 start.go:138] virtualization:  
	I0717 20:47:34.923415 1019794 out.go:177] * [missing-upgrade-200025] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:47:34.927024 1019794 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:47:34.927124 1019794 notify.go:220] Checking for updates...
	I0717 20:47:34.932345 1019794 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:47:34.934771 1019794 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:47:34.936908 1019794 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:47:34.938670 1019794 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:47:34.940824 1019794 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:47:34.943610 1019794 config.go:182] Loaded profile config "missing-upgrade-200025": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 20:47:34.946220 1019794 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 20:47:34.948121 1019794 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:47:35.007715 1019794 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:47:35.007923 1019794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:47:35.173451 1019794 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-17 20:47:35.158655642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:47:35.173564 1019794 docker.go:294] overlay module found
	I0717 20:47:35.175729 1019794 out.go:177] * Using the docker driver based on existing profile
	I0717 20:47:35.177736 1019794 start.go:298] selected driver: docker
	I0717 20:47:35.177755 1019794 start.go:880] validating driver "docker" against &{Name:missing-upgrade-200025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-200025 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:47:35.177892 1019794 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:47:35.178517 1019794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:47:35.301991 1019794 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-07-17 20:47:35.289499001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:47:35.302315 1019794 cni.go:84] Creating CNI manager for ""
	I0717 20:47:35.302326 1019794 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:47:35.302340 1019794 start_flags.go:319] config:
	{Name:missing-upgrade-200025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:missing-upgrade-200025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.21.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0}
	I0717 20:47:35.305141 1019794 out.go:177] * Starting control plane node missing-upgrade-200025 in cluster missing-upgrade-200025
	I0717 20:47:35.307433 1019794 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 20:47:35.309508 1019794 out.go:177] * Pulling base image ...
	I0717 20:47:35.311288 1019794 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0717 20:47:35.311388 1019794 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0717 20:47:35.330763 1019794 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0717 20:47:35.330784 1019794 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0717 20:47:35.383504 1019794 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0717 20:47:35.383533 1019794 cache.go:57] Caching tarball of preloaded images
	I0717 20:47:35.383686 1019794 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0717 20:47:35.385681 1019794 out.go:177] * Downloading Kubernetes v1.21.2 preload ...
	I0717 20:47:35.387339 1019794 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:47:35.528537 1019794 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.21.2/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:f1e1f7bdb5d08690c839f70306158850 -> /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4
	I0717 20:47:46.105241 1019794 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:47:46.105348 1019794 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:47:47.156842 1019794 cache.go:60] Finished verifying existence of preloaded tar for  v1.21.2 on containerd
	I0717 20:47:47.157001 1019794 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/missing-upgrade-200025/config.json ...
	I0717 20:47:47.157232 1019794 cache.go:195] Successfully downloaded all kic artifacts
	I0717 20:47:47.157281 1019794 start.go:365] acquiring machines lock for missing-upgrade-200025: {Name:mk38eb4480aa7029156988338f8a0c94462d5bc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:47:47.157374 1019794 start.go:369] acquired machines lock for "missing-upgrade-200025" in 69.121µs
	I0717 20:47:47.157391 1019794 start.go:96] Skipping create...Using existing machine configuration
	I0717 20:47:47.157409 1019794 fix.go:54] fixHost starting: 
	I0717 20:47:47.157708 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:47.184644 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:47:47.184704 1019794 fix.go:102] recreateIfNeeded on missing-upgrade-200025: state= err=unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:47.184736 1019794 fix.go:107] machineExists: false. err=machine does not exist
	I0717 20:47:47.187359 1019794 out.go:177] * docker "missing-upgrade-200025" container is missing, will recreate.
	I0717 20:47:47.189604 1019794 delete.go:124] DEMOLISHING missing-upgrade-200025 ...
	I0717 20:47:47.189710 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:47.208595 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	W0717 20:47:47.208659 1019794 stop.go:75] unable to get state: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:47.208687 1019794 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:47.209226 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:47.227279 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:47:47.227345 1019794 delete.go:82] Unable to get host status for missing-upgrade-200025, assuming it has already been deleted: state: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:47.227416 1019794 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-200025
	W0717 20:47:47.245192 1019794 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-200025 returned with exit code 1
	I0717 20:47:47.245231 1019794 kic.go:367] could not find the container missing-upgrade-200025 to remove it. will try anyways
	I0717 20:47:47.245291 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:47.265814 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	W0717 20:47:47.265873 1019794 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:47.266056 1019794 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-200025 /bin/bash -c "sudo init 0"
	W0717 20:47:47.283278 1019794 cli_runner.go:211] docker exec --privileged -t missing-upgrade-200025 /bin/bash -c "sudo init 0" returned with exit code 1
	I0717 20:47:47.283317 1019794 oci.go:647] error shutdown missing-upgrade-200025: docker exec --privileged -t missing-upgrade-200025 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:48.284898 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:48.319446 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:47:48.319531 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:48.319542 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:47:48.319570 1019794 retry.go:31] will retry after 706.709242ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:49.026486 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:49.085008 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:47:49.085077 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:49.085088 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:47:49.085112 1019794 retry.go:31] will retry after 634.269818ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:49.719571 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:49.741972 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:47:49.742040 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:49.742051 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:47:49.742074 1019794 retry.go:31] will retry after 673.141741ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:50.415528 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:50.437663 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:47:50.437729 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:50.437744 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:47:50.437768 1019794 retry.go:31] will retry after 2.406066342s: couldn't verify container is exited. %v: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:52.845015 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:52.863578 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:47:52.863643 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:52.863653 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:47:52.863677 1019794 retry.go:31] will retry after 3.243393869s: couldn't verify container is exited. %v: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:56.107409 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:47:56.124317 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:47:56.124389 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:47:56.124403 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:47:56.124431 1019794 retry.go:31] will retry after 4.359611402s: couldn't verify container is exited. %v: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:48:00.485059 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:48:00.504226 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:48:00.504293 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:48:00.504308 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:48:00.504334 1019794 retry.go:31] will retry after 5.337095807s: couldn't verify container is exited. %v: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:48:05.841970 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:48:05.858874 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:48:05.858937 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:48:05.858947 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:48:05.858977 1019794 oci.go:88] couldn't shut down missing-upgrade-200025 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	 
	I0717 20:48:05.859033 1019794 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-200025
	I0717 20:48:05.875880 1019794 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-200025
	W0717 20:48:05.892605 1019794 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-200025 returned with exit code 1
	I0717 20:48:05.892708 1019794 cli_runner.go:164] Run: docker network inspect missing-upgrade-200025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:48:05.911201 1019794 cli_runner.go:164] Run: docker network rm missing-upgrade-200025
	I0717 20:48:06.021074 1019794 fix.go:114] Sleeping 1 second for extra luck!
	I0717 20:48:07.022059 1019794 start.go:125] createHost starting for "" (driver="docker")
	I0717 20:48:07.025932 1019794 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 20:48:07.026089 1019794 start.go:159] libmachine.API.Create for "missing-upgrade-200025" (driver="docker")
	I0717 20:48:07.026116 1019794 client.go:168] LocalClient.Create starting
	I0717 20:48:07.026205 1019794 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem
	I0717 20:48:07.026245 1019794 main.go:141] libmachine: Decoding PEM data...
	I0717 20:48:07.026266 1019794 main.go:141] libmachine: Parsing certificate...
	I0717 20:48:07.026327 1019794 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem
	I0717 20:48:07.026350 1019794 main.go:141] libmachine: Decoding PEM data...
	I0717 20:48:07.026364 1019794 main.go:141] libmachine: Parsing certificate...
	I0717 20:48:07.026625 1019794 cli_runner.go:164] Run: docker network inspect missing-upgrade-200025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 20:48:07.044017 1019794 cli_runner.go:211] docker network inspect missing-upgrade-200025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 20:48:07.044099 1019794 network_create.go:281] running [docker network inspect missing-upgrade-200025] to gather additional debugging logs...
	I0717 20:48:07.044122 1019794 cli_runner.go:164] Run: docker network inspect missing-upgrade-200025
	W0717 20:48:07.061316 1019794 cli_runner.go:211] docker network inspect missing-upgrade-200025 returned with exit code 1
	I0717 20:48:07.061351 1019794 network_create.go:284] error running [docker network inspect missing-upgrade-200025]: docker network inspect missing-upgrade-200025: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-200025 not found
	I0717 20:48:07.061364 1019794 network_create.go:286] output of [docker network inspect missing-upgrade-200025]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-200025 not found
	
	** /stderr **
	I0717 20:48:07.061431 1019794 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:48:07.079367 1019794 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0b860b1c7272 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c8:5f:d0:1e} reservation:<nil>}
	I0717 20:48:07.079737 1019794 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-943d483836f4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:b0:a7:a9:78} reservation:<nil>}
	I0717 20:48:07.080100 1019794 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8cd76a96833e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:11:cc:bf:5d} reservation:<nil>}
	I0717 20:48:07.080533 1019794 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000faf180}
	I0717 20:48:07.080558 1019794 network_create.go:123] attempt to create docker network missing-upgrade-200025 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0717 20:48:07.080619 1019794 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-200025 missing-upgrade-200025
	I0717 20:48:07.154974 1019794 network_create.go:107] docker network missing-upgrade-200025 192.168.76.0/24 created
	I0717 20:48:07.155006 1019794 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-200025" container
	I0717 20:48:07.155146 1019794 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 20:48:07.172781 1019794 cli_runner.go:164] Run: docker volume create missing-upgrade-200025 --label name.minikube.sigs.k8s.io=missing-upgrade-200025 --label created_by.minikube.sigs.k8s.io=true
	I0717 20:48:07.191816 1019794 oci.go:103] Successfully created a docker volume missing-upgrade-200025
	I0717 20:48:07.191914 1019794 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-200025-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-200025 --entrypoint /usr/bin/test -v missing-upgrade-200025:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0717 20:48:07.694716 1019794 oci.go:107] Successfully prepared a docker volume missing-upgrade-200025
	I0717 20:48:07.694757 1019794 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0717 20:48:07.694778 1019794 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 20:48:07.694868 1019794 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-200025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 20:48:13.286911 1019794 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-200025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.591986982s)
	I0717 20:48:13.286945 1019794 kic.go:199] duration metric: took 5.592162 seconds to extract preloaded images to volume
	W0717 20:48:13.287090 1019794 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 20:48:13.287204 1019794 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 20:48:13.372161 1019794 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-200025 --name missing-upgrade-200025 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-200025 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-200025 --network missing-upgrade-200025 --ip 192.168.76.2 --volume missing-upgrade-200025:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0717 20:48:13.736345 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Running}}
	I0717 20:48:13.755931 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	I0717 20:48:13.778092 1019794 cli_runner.go:164] Run: docker exec missing-upgrade-200025 stat /var/lib/dpkg/alternatives/iptables
	I0717 20:48:13.875609 1019794 oci.go:144] the created container "missing-upgrade-200025" has a running status.
	I0717 20:48:13.875635 1019794 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa...
	I0717 20:48:14.749263 1019794 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 20:48:14.784944 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	I0717 20:48:14.814263 1019794 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 20:48:14.814287 1019794 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-200025 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 20:48:14.894327 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	I0717 20:48:14.920501 1019794 machine.go:88] provisioning docker machine ...
	I0717 20:48:14.922319 1019794 ubuntu.go:169] provisioning hostname "missing-upgrade-200025"
	I0717 20:48:14.922388 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:14.948251 1019794 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:14.948728 1019794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33895 <nil> <nil>}
	I0717 20:48:14.948743 1019794 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-200025 && echo "missing-upgrade-200025" | sudo tee /etc/hostname
	I0717 20:48:15.137799 1019794 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-200025
	
	I0717 20:48:15.137943 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:15.178177 1019794 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:15.178652 1019794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33895 <nil> <nil>}
	I0717 20:48:15.178679 1019794 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-200025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-200025/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-200025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:48:15.318341 1019794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:48:15.318370 1019794 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-898608/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-898608/.minikube}
	I0717 20:48:15.318405 1019794 ubuntu.go:177] setting up certificates
	I0717 20:48:15.318414 1019794 provision.go:83] configureAuth start
	I0717 20:48:15.318480 1019794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-200025
	I0717 20:48:15.343398 1019794 provision.go:138] copyHostCerts
	I0717 20:48:15.343469 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem, removing ...
	I0717 20:48:15.343481 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem
	I0717 20:48:15.343558 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem (1078 bytes)
	I0717 20:48:15.343657 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem, removing ...
	I0717 20:48:15.343668 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem
	I0717 20:48:15.343699 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem (1123 bytes)
	I0717 20:48:15.343761 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem, removing ...
	I0717 20:48:15.343768 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem
	I0717 20:48:15.343794 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem (1675 bytes)
	I0717 20:48:15.343846 1019794 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-200025 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-200025]
	I0717 20:48:15.733322 1019794 provision.go:172] copyRemoteCerts
	I0717 20:48:15.733419 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:48:15.733465 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:15.752261 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:15.842483 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 20:48:15.867699 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 20:48:15.891567 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 20:48:15.915006 1019794 provision.go:86] duration metric: configureAuth took 596.576101ms
	I0717 20:48:15.915030 1019794 ubuntu.go:193] setting minikube options for container-runtime
	I0717 20:48:15.915222 1019794 config.go:182] Loaded profile config "missing-upgrade-200025": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 20:48:15.915229 1019794 machine.go:91] provisioned docker machine in 992.931862ms
	I0717 20:48:15.915242 1019794 client.go:171] LocalClient.Create took 8.889120841s
	I0717 20:48:15.915256 1019794 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-200025" took 8.889167569s
	I0717 20:48:15.915264 1019794 start.go:300] post-start starting for "missing-upgrade-200025" (driver="docker")
	I0717 20:48:15.915272 1019794 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:48:15.915320 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:48:15.915358 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:15.933380 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:16.023055 1019794 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:48:16.027204 1019794 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 20:48:16.027246 1019794 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 20:48:16.027260 1019794 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 20:48:16.027268 1019794 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0717 20:48:16.027278 1019794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/addons for local assets ...
	I0717 20:48:16.027346 1019794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/files for local assets ...
	I0717 20:48:16.027437 1019794 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem -> 9039972.pem in /etc/ssl/certs
	I0717 20:48:16.027557 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:48:16.037225 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem --> /etc/ssl/certs/9039972.pem (1708 bytes)
	I0717 20:48:16.061420 1019794 start.go:303] post-start completed in 146.142001ms
	I0717 20:48:16.061878 1019794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-200025
	I0717 20:48:16.084145 1019794 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/missing-upgrade-200025/config.json ...
	I0717 20:48:16.084448 1019794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:48:16.084501 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:16.107523 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:16.195454 1019794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 20:48:16.201210 1019794 start.go:128] duration metric: createHost completed in 9.179075584s
	I0717 20:48:16.201308 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:48:16.219541 1019794 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 20:48:16.219572 1019794 machine.go:88] provisioning docker machine ...
	I0717 20:48:16.219590 1019794 ubuntu.go:169] provisioning hostname "missing-upgrade-200025"
	I0717 20:48:16.219659 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:16.238055 1019794 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:16.238495 1019794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33895 <nil> <nil>}
	I0717 20:48:16.238515 1019794 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-200025 && echo "missing-upgrade-200025" | sudo tee /etc/hostname
	I0717 20:48:16.377149 1019794 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-200025
	
	I0717 20:48:16.377301 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:16.397495 1019794 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:16.398095 1019794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33895 <nil> <nil>}
	I0717 20:48:16.398127 1019794 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-200025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-200025/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-200025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:48:16.525936 1019794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:48:16.525960 1019794 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-898608/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-898608/.minikube}
	I0717 20:48:16.525985 1019794 ubuntu.go:177] setting up certificates
	I0717 20:48:16.525994 1019794 provision.go:83] configureAuth start
	I0717 20:48:16.526056 1019794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-200025
	I0717 20:48:16.545124 1019794 provision.go:138] copyHostCerts
	I0717 20:48:16.545199 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem, removing ...
	I0717 20:48:16.545212 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem
	I0717 20:48:16.545287 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem (1078 bytes)
	I0717 20:48:16.545382 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem, removing ...
	I0717 20:48:16.545391 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem
	I0717 20:48:16.545418 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem (1123 bytes)
	I0717 20:48:16.545479 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem, removing ...
	I0717 20:48:16.545489 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem
	I0717 20:48:16.545517 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem (1675 bytes)
	I0717 20:48:16.545568 1019794 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-200025 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-200025]
	I0717 20:48:17.140969 1019794 provision.go:172] copyRemoteCerts
	I0717 20:48:17.141023 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:48:17.141070 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.165895 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:17.259396 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 20:48:17.295628 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 20:48:17.327805 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 20:48:17.364407 1019794 provision.go:86] duration metric: configureAuth took 838.401593ms
	I0717 20:48:17.364439 1019794 ubuntu.go:193] setting minikube options for container-runtime
	I0717 20:48:17.364687 1019794 config.go:182] Loaded profile config "missing-upgrade-200025": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 20:48:17.364699 1019794 machine.go:91] provisioned docker machine in 1.145121405s
	I0717 20:48:17.364713 1019794 start.go:300] post-start starting for "missing-upgrade-200025" (driver="docker")
	I0717 20:48:17.364730 1019794 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:48:17.364787 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:48:17.364826 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.398242 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:17.506108 1019794 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:48:17.511482 1019794 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 20:48:17.511526 1019794 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 20:48:17.511538 1019794 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 20:48:17.511545 1019794 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0717 20:48:17.511561 1019794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/addons for local assets ...
	I0717 20:48:17.511650 1019794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/files for local assets ...
	I0717 20:48:17.511744 1019794 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem -> 9039972.pem in /etc/ssl/certs
	I0717 20:48:17.511887 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:48:17.529904 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem --> /etc/ssl/certs/9039972.pem (1708 bytes)
	I0717 20:48:17.563399 1019794 start.go:303] post-start completed in 198.658807ms
	I0717 20:48:17.563491 1019794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:48:17.563562 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.595269 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:17.701473 1019794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 20:48:17.713551 1019794 fix.go:56] fixHost completed within 30.556144488s
	I0717 20:48:17.713578 1019794 start.go:83] releasing machines lock for "missing-upgrade-200025", held for 30.556191298s
	I0717 20:48:17.713658 1019794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-200025
	I0717 20:48:17.751694 1019794 ssh_runner.go:195] Run: cat /version.json
	I0717 20:48:17.751760 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.751699 1019794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:48:17.751922 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.803472 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:17.814137 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	W0717 20:48:17.916095 1019794 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 20:48:17.916258 1019794 ssh_runner.go:195] Run: systemctl --version
	I0717 20:48:18.127059 1019794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 20:48:18.134550 1019794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 20:48:18.215450 1019794 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 20:48:18.215551 1019794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:48:18.274735 1019794 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 20:48:18.274761 1019794 start.go:469] detecting cgroup driver to use...
	I0717 20:48:18.274815 1019794 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 20:48:18.274886 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 20:48:18.305757 1019794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 20:48:18.328351 1019794 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:48:18.328409 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:48:18.354706 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:48:18.376193 1019794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 20:48:18.396999 1019794 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 20:48:18.397071 1019794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:48:18.655716 1019794 docker.go:212] disabling docker service ...
	I0717 20:48:18.655871 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:48:18.703628 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:48:18.719804 1019794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:48:18.867688 1019794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:48:19.186409 1019794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:48:19.205539 1019794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:48:19.243103 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0717 20:48:19.285211 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 20:48:19.335592 1019794 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 20:48:19.335692 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 20:48:19.385837 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:48:19.427462 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 20:48:19.447379 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:48:19.462194 1019794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:48:19.476463 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 20:48:19.491579 1019794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:48:19.505697 1019794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:48:19.515712 1019794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:48:19.687280 1019794 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 20:48:19.883714 1019794 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 20:48:19.883781 1019794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 20:48:19.900093 1019794 start.go:537] Will wait 60s for crictl version
	I0717 20:48:19.900219 1019794 ssh_runner.go:195] Run: which crictl
	I0717 20:48:19.915529 1019794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:48:20.034148 1019794 retry.go:31] will retry after 8.886676059s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:48:20Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 20:48:28.923834 1019794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:48:28.952820 1019794 retry.go:31] will retry after 12.265934816s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:48:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 20:48:41.218986 1019794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:48:41.250390 1019794 retry.go:31] will retry after 27.521724164s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:48:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 20:49:08.775473 1019794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:49:08.810084 1019794 out.go:177] 
	W0717 20:49:08.813462 1019794 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:08Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:08Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0717 20:49:08.813495 1019794 out.go:239] * 
	* 
	W0717 20:49:08.815867 1019794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 20:49:08.818587 1019794 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.22.0. args: out/minikube-linux-arm64 start -p missing-upgrade-200025 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-07-17 20:49:08.862425032 +0000 UTC m=+2074.555907290
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-200025
helpers_test.go:235: (dbg) docker inspect missing-upgrade-200025:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aaa482aa0d2cf74f72fac2798cf3cdf97b838e929287015917663ab2f826f0fc",
	        "Created": "2023-07-17T20:48:13.388800292Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1021921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:48:13.727838621Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/aaa482aa0d2cf74f72fac2798cf3cdf97b838e929287015917663ab2f826f0fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aaa482aa0d2cf74f72fac2798cf3cdf97b838e929287015917663ab2f826f0fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/aaa482aa0d2cf74f72fac2798cf3cdf97b838e929287015917663ab2f826f0fc/hosts",
	        "LogPath": "/var/lib/docker/containers/aaa482aa0d2cf74f72fac2798cf3cdf97b838e929287015917663ab2f826f0fc/aaa482aa0d2cf74f72fac2798cf3cdf97b838e929287015917663ab2f826f0fc-json.log",
	        "Name": "/missing-upgrade-200025",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-200025:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-200025",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85511c85a5539bd1920d2807864f4a3e35dbf8f9f652d2cd5534a27270a645bf-init/diff:/var/lib/docker/overlay2/e5e7781979b407246c74240d334737dbe73f793fa69b04bc6478667a323062ef/diff:/var/lib/docker/overlay2/b59a950befb9a1b3496a0dabd6de405dc66549f856aacae61484177a102dafb9/diff:/var/lib/docker/overlay2/f169eddd090c1b1ceda3c4b21c1885f75aebcfc14b2e4911f2697d683f15c2ee/diff:/var/lib/docker/overlay2/7069960a07f15dd7bd58f7fe731baa1877b886ba0252c9314ab087ff083b41c9/diff:/var/lib/docker/overlay2/6bb8c7c80fd26366e8250f3acc660b2bd7ba830bafab9ef31d1938df8828c593/diff:/var/lib/docker/overlay2/9d5056d5750432b989e3cd909879f2769790aa93a04845a1fd1806a3a2041ff1/diff:/var/lib/docker/overlay2/04143a81573cc5b2843133aedb852bdad12c33ca02a7ad5f7e7c269ac78b2bbb/diff:/var/lib/docker/overlay2/15663e7fe150de3b02d942842ecad585463621e472034565bfee69f97e8ac7cf/diff:/var/lib/docker/overlay2/d2b5e6f70de8398b5b7c7812c8c5bdf2b8d1d7a18c773e6f49f28ed0f4fff729/diff:/var/lib/docker/overlay2/80bc112c1ae22882a19f8c96aab45d30def5d4c6a8c30de7d147c74ccb5aeaf0/diff:/var/lib/docker/overlay2/27c1a6e3fa9466bc74522ebe635143ad25554bf689240929ca58a6c62de7ca79/diff:/var/lib/docker/overlay2/8813555ce787d39b0a2dfb27492b13a4b2feced7c37c1476db09ce909f4bbdcc/diff:/var/lib/docker/overlay2/bd899ce1e574f31de1304b4bca49d55c51efe7b8db72d64a9ccbcc5f4ae2e826/diff:/var/lib/docker/overlay2/acb58b47535362beee39f955dddcc46a91ef759d4b1b7e32091c960db477d211/diff:/var/lib/docker/overlay2/c4931f3bff06efcf71255e54048328b273e5467ceeadb86a9d14366a80707f81/diff:/var/lib/docker/overlay2/c4dacb883b927dab3284a6ee45320d6c4d2b21deb1c25fa0038da6446384269a/diff:/var/lib/docker/overlay2/b501c2bb14117ffdb996dd1c3b3476ab04a1a33a7233b4439503243af3062308/diff:/var/lib/docker/overlay2/f37a094eea44588ff8bf5743db603ef1529cbc2bd49929737b4e087474504ede/diff:/var/lib/docker/overlay2/32de038fee4845676945041cc5986be71a094d2173ab370eefcce4caa905680b/diff:/var/lib/docker/overlay2/25dfef1e3277367ca75944fee706579c681e4cad079b64c7adecd8c834ffcf1b/diff:/var/lib/docker/overlay2/bd529f93291cb31bb2e1ec3b3dc9d97284bf5c839972548d7125ed57fa5b79a0/diff:/var/lib/docker/overlay2/a014c5fac1a8b4bb38d16cde5970504466055b81354a61ae060b75dae3887c03/diff:/var/lib/docker/overlay2/6a07828b45e06793ab955ac5556701874600353d0231a567bf2ea959fcca4c4d/diff:/var/lib/docker/overlay2/4d7a32bddb8064753ff3b5ba22c1f3e4201912c24b9a12162313b2aa69cb69e3/diff:/var/lib/docker/overlay2/6e15959d820ee940c029368709ae11df37117f962c13ee5486552313e1b7ec7e/diff:/var/lib/docker/overlay2/4183bffea5e189a74f5522994f9a9415564dbb5fce0ea7df6d4e0489b5782783/diff:/var/lib/docker/overlay2/eaaa25e119d236c3adbb6185fba7c960e2240b26edecde225d12d028d2e495fb/diff:/var/lib/docker/overlay2/60a8aa0f7a7557d2f609881dfdd649ff8c9a1b5c7a9713155a6bccb494a7dd92/diff:/var/lib/docker/overlay2/ff0a9738b65b1de8cefe7d412769e6fdb185a6d5f78c2961bbced57e6f4b12a6/diff:/var/lib/docker/overlay2/e3b536211bd3759cfcf5e6327666578067a1cdd0c466fb5b0d31732f36c7207b/diff:/var/lib/docker/overlay2/6582177e755b72fb12fb5f5200b48768006f9b58258fdecaae2fcb1a32c6f5f7/diff:/var/lib/docker/overlay2/e04fb9754d8df01bbde57af88dc49163ec79c888d1c30c2d0cf1fba54278f6e3/diff:/var/lib/docker/overlay2/f6bbcb46cfb8993df0518afd75d442b5a6ed1ff7609f75e1b91e4127e141bb50/diff:/var/lib/docker/overlay2/50731150aac51d1ced913559efa0c1706a5d14211296fbaefa0a8a0c610981c4/diff:/var/lib/docker/overlay2/aa1b8f6043f48bc1066537057129b285d1f6054eb1d5073eb679f55f43f59f58/diff:/var/lib/docker/overlay2/35e0336364ad8b2d69c0b0761b5f6d4ab4d5ac976d10db110aead51fdaabe167/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85511c85a5539bd1920d2807864f4a3e35dbf8f9f652d2cd5534a27270a645bf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85511c85a5539bd1920d2807864f4a3e35dbf8f9f652d2cd5534a27270a645bf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85511c85a5539bd1920d2807864f4a3e35dbf8f9f652d2cd5534a27270a645bf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-200025",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-200025/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-200025",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-200025",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-200025",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efd20de96b1023832227bb27f82c7f310c147fc1afbc5d8f5af11eaa6df3c3fa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33895"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33894"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33891"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33893"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33892"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/efd20de96b10",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-200025": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "aaa482aa0d2c",
	                        "missing-upgrade-200025"
	                    ],
	                    "NetworkID": "f841bddd82dcf706596e86c3d935e2582cb0ff7337f6e5914a5952816e2f4332",
	                    "EndpointID": "219f9edceb710e4be68ae66377b01afb907129543751ab8f48275c2657a1c79d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-200025 -n missing-upgrade-200025
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-200025 -n missing-upgrade-200025: exit status 2 (342.44916ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMissingContainerUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMissingContainerUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p missing-upgrade-200025 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p missing-upgrade-200025 logs -n 25: (1.355382707s)
helpers_test.go:252: TestMissingContainerUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |              Args              |           Profile           |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------|-----------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p scheduled-stop-183475       | scheduled-stop-183475       | jenkins | v1.30.1 | 17 Jul 23 20:43 UTC           |                               |
	|         | --schedule 15s                 |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-183475       | scheduled-stop-183475       | jenkins | v1.30.1 | 17 Jul 23 20:43 UTC           |                               |
	|         | --schedule 15s                 |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-183475       | scheduled-stop-183475       | jenkins | v1.30.1 | 17 Jul 23 20:43 UTC           |                               |
	|         | --schedule 15s                 |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-183475       | scheduled-stop-183475       | jenkins | v1.30.1 | 17 Jul 23 20:43 UTC           | 17 Jul 23 20:43 UTC           |
	|         | --cancel-scheduled             |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-183475       | scheduled-stop-183475       | jenkins | v1.30.1 | 17 Jul 23 20:44 UTC           |                               |
	|         | --schedule 15s                 |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-183475       | scheduled-stop-183475       | jenkins | v1.30.1 | 17 Jul 23 20:44 UTC           |                               |
	|         | --schedule 15s                 |                             |         |         |                               |                               |
	| stop    | -p scheduled-stop-183475       | scheduled-stop-183475       | jenkins | v1.30.1 | 17 Jul 23 20:44 UTC           | 17 Jul 23 20:44 UTC           |
	|         | --schedule 15s                 |                             |         |         |                               |                               |
	| delete  | -p scheduled-stop-183475       | scheduled-stop-183475       | jenkins | v1.30.1 | 17 Jul 23 20:44 UTC           | 17 Jul 23 20:44 UTC           |
	| start   | -p insufficient-storage-446661 | insufficient-storage-446661 | jenkins | v1.30.1 | 17 Jul 23 20:44 UTC           |                               |
	|         | --memory=2048 --output=json    |                             |         |         |                               |                               |
	|         | --wait=true --driver=docker    |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| delete  | -p insufficient-storage-446661 | insufficient-storage-446661 | jenkins | v1.30.1 | 17 Jul 23 20:45 UTC           | 17 Jul 23 20:45 UTC           |
	| start   | -p NoKubernetes-476088         | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:45 UTC           |                               |
	|         | --no-kubernetes                |                             |         |         |                               |                               |
	|         | --kubernetes-version=1.20      |                             |         |         |                               |                               |
	|         | --driver=docker                |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| start   | -p NoKubernetes-476088         | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:45 UTC           | 17 Jul 23 20:45 UTC           |
	|         | --driver=docker                |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| start   | -p NoKubernetes-476088         | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:45 UTC           | 17 Jul 23 20:46 UTC           |
	|         | --no-kubernetes                |                             |         |         |                               |                               |
	|         | --driver=docker                |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| delete  | -p NoKubernetes-476088         | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:46 UTC           | 17 Jul 23 20:46 UTC           |
	| start   | -p NoKubernetes-476088         | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:46 UTC           | 17 Jul 23 20:46 UTC           |
	|         | --no-kubernetes                |                             |         |         |                               |                               |
	|         | --driver=docker                |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| ssh     | -p NoKubernetes-476088 sudo    | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:46 UTC           |                               |
	|         | systemctl is-active --quiet    |                             |         |         |                               |                               |
	|         | service kubelet                |                             |         |         |                               |                               |
	| stop    | -p NoKubernetes-476088         | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:46 UTC           | 17 Jul 23 20:46 UTC           |
	| start   | -p NoKubernetes-476088         | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:46 UTC           | 17 Jul 23 20:46 UTC           |
	|         | --driver=docker                |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| ssh     | -p NoKubernetes-476088 sudo    | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:46 UTC           |                               |
	|         | systemctl is-active --quiet    |                             |         |         |                               |                               |
	|         | service kubelet                |                             |         |         |                               |                               |
	| delete  | -p NoKubernetes-476088         | NoKubernetes-476088         | jenkins | v1.30.1 | 17 Jul 23 20:46 UTC           | 17 Jul 23 20:46 UTC           |
	| start   | -p kubernetes-upgrade-854415   | kubernetes-upgrade-854415   | jenkins | v1.30.1 | 17 Jul 23 20:46 UTC           | 17 Jul 23 20:47 UTC           |
	|         | --memory=2200                  |                             |         |         |                               |                               |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                               |                               |
	|         | --alsologtostderr              |                             |         |         |                               |                               |
	|         | -v=1 --driver=docker           |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| start   | -p missing-upgrade-200025      | missing-upgrade-200025      | jenkins | v1.22.0 | Mon, 17 Jul 2023 20:45:09 UTC | Mon, 17 Jul 2023 20:47:24 UTC |
	|         | --memory=2200 --driver=docker  |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| start   | -p missing-upgrade-200025      | missing-upgrade-200025      | jenkins | v1.30.1 | 17 Jul 23 20:47 UTC           |                               |
	|         | --memory=2200                  |                             |         |         |                               |                               |
	|         | --alsologtostderr              |                             |         |         |                               |                               |
	|         | -v=1 --driver=docker           |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	| stop    | -p kubernetes-upgrade-854415   | kubernetes-upgrade-854415   | jenkins | v1.30.1 | 17 Jul 23 20:47 UTC           | 17 Jul 23 20:48 UTC           |
	| start   | -p kubernetes-upgrade-854415   | kubernetes-upgrade-854415   | jenkins | v1.30.1 | 17 Jul 23 20:48 UTC           |                               |
	|         | --memory=2200                  |                             |         |         |                               |                               |
	|         | --kubernetes-version=v1.27.3   |                             |         |         |                               |                               |
	|         | --alsologtostderr              |                             |         |         |                               |                               |
	|         | -v=1 --driver=docker           |                             |         |         |                               |                               |
	|         | --container-runtime=containerd |                             |         |         |                               |                               |
	|---------|--------------------------------|-----------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:48:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:48:04.154667 1020628 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:48:04.154884 1020628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:48:04.154896 1020628 out.go:309] Setting ErrFile to fd 2...
	I0717 20:48:04.154902 1020628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:48:04.155204 1020628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:48:04.155612 1020628 out.go:303] Setting JSON to false
	I0717 20:48:04.156542 1020628 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16232,"bootTime":1689610653,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:48:04.156606 1020628 start.go:138] virtualization:  
	I0717 20:48:04.159111 1020628 out.go:177] * [kubernetes-upgrade-854415] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:48:04.161415 1020628 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:48:04.163046 1020628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:48:04.161576 1020628 notify.go:220] Checking for updates...
	I0717 20:48:04.166356 1020628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:48:04.168324 1020628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:48:04.170007 1020628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:48:04.171679 1020628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:48:04.173693 1020628 config.go:182] Loaded profile config "kubernetes-upgrade-854415": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0717 20:48:04.174279 1020628 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:48:04.198444 1020628 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:48:04.198552 1020628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:48:04.303470 1020628 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 20:48:04.293885424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:48:04.303573 1020628 docker.go:294] overlay module found
	I0717 20:48:04.306790 1020628 out.go:177] * Using the docker driver based on existing profile
	I0717 20:48:04.308823 1020628 start.go:298] selected driver: docker
	I0717 20:48:04.308843 1020628 start.go:880] validating driver "docker" against &{Name:kubernetes-upgrade-854415 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-854415 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:48:04.308984 1020628 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:48:04.309579 1020628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:48:04.378862 1020628 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-07-17 20:48:04.369363941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:48:04.379218 1020628 cni.go:84] Creating CNI manager for ""
	I0717 20:48:04.379230 1020628 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:48:04.379241 1020628 start_flags.go:319] config:
	{Name:kubernetes-upgrade-854415 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-854415 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:48:04.381231 1020628 out.go:177] * Starting control plane node kubernetes-upgrade-854415 in cluster kubernetes-upgrade-854415
	I0717 20:48:04.383160 1020628 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 20:48:04.385083 1020628 out.go:177] * Pulling base image ...
	I0717 20:48:04.387178 1020628 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:48:04.387235 1020628 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4
	I0717 20:48:04.387249 1020628 cache.go:57] Caching tarball of preloaded images
	I0717 20:48:04.387278 1020628 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 20:48:04.387334 1020628 preload.go:174] Found /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0717 20:48:04.387344 1020628 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on containerd
	I0717 20:48:04.387494 1020628 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/config.json ...
	I0717 20:48:04.405884 1020628 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 20:48:04.405909 1020628 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 20:48:04.405930 1020628 cache.go:195] Successfully downloaded all kic artifacts
	I0717 20:48:04.405974 1020628 start.go:365] acquiring machines lock for kubernetes-upgrade-854415: {Name:mk56b197182206dcdf3f137b75be278fbed3e385 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 20:48:04.406051 1020628 start.go:369] acquired machines lock for "kubernetes-upgrade-854415" in 56.017µs
	I0717 20:48:04.406072 1020628 start.go:96] Skipping create...Using existing machine configuration
	I0717 20:48:04.406089 1020628 fix.go:54] fixHost starting: 
	I0717 20:48:04.406364 1020628 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-854415 --format={{.State.Status}}
	I0717 20:48:04.424393 1020628 fix.go:102] recreateIfNeeded on kubernetes-upgrade-854415: state=Stopped err=<nil>
	W0717 20:48:04.424432 1020628 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 20:48:04.426778 1020628 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-854415" ...
	I0717 20:48:00.485059 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:48:00.504226 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:48:00.504293 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:48:00.504308 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:48:00.504334 1019794 retry.go:31] will retry after 5.337095807s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:48:05.841970 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:48:05.858874 1019794 cli_runner.go:211] docker container inspect missing-upgrade-200025 --format={{.State.Status}} returned with exit code 1
	I0717 20:48:05.858937 1019794 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	I0717 20:48:05.858947 1019794 oci.go:661] temporary error: container missing-upgrade-200025 status is  but expect it to be exited
	I0717 20:48:05.858977 1019794 oci.go:88] couldn't shut down missing-upgrade-200025 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-200025": docker container inspect missing-upgrade-200025 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-200025
	 
	I0717 20:48:05.859033 1019794 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-200025
	I0717 20:48:05.875880 1019794 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-200025
	W0717 20:48:05.892605 1019794 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-200025 returned with exit code 1
	I0717 20:48:05.892708 1019794 cli_runner.go:164] Run: docker network inspect missing-upgrade-200025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:48:05.911201 1019794 cli_runner.go:164] Run: docker network rm missing-upgrade-200025
	I0717 20:48:06.021074 1019794 fix.go:114] Sleeping 1 second for extra luck!
	I0717 20:48:07.022059 1019794 start.go:125] createHost starting for "" (driver="docker")
	I0717 20:48:04.428620 1020628 cli_runner.go:164] Run: docker start kubernetes-upgrade-854415
	I0717 20:48:04.767005 1020628 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-854415 --format={{.State.Status}}
	I0717 20:48:04.797651 1020628 kic.go:426] container "kubernetes-upgrade-854415" state is running.
	I0717 20:48:04.798030 1020628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-854415
	I0717 20:48:04.819754 1020628 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/config.json ...
	I0717 20:48:04.819985 1020628 machine.go:88] provisioning docker machine ...
	I0717 20:48:04.820000 1020628 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-854415"
	I0717 20:48:04.820054 1020628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854415
	I0717 20:48:04.840828 1020628 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:04.841335 1020628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33890 <nil> <nil>}
	I0717 20:48:04.841350 1020628 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-854415 && echo "kubernetes-upgrade-854415" | sudo tee /etc/hostname
	I0717 20:48:04.841888 1020628 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54352->127.0.0.1:33890: read: connection reset by peer
	I0717 20:48:08.015465 1020628 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-854415
	
	I0717 20:48:08.015612 1020628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854415
	I0717 20:48:08.041154 1020628 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:08.041618 1020628 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33890 <nil> <nil>}
	I0717 20:48:08.041644 1020628 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-854415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-854415/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-854415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:48:08.190488 1020628 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:48:08.190574 1020628 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-898608/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-898608/.minikube}
	I0717 20:48:08.190631 1020628 ubuntu.go:177] setting up certificates
	I0717 20:48:08.190657 1020628 provision.go:83] configureAuth start
	I0717 20:48:08.190744 1020628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-854415
	I0717 20:48:08.220668 1020628 provision.go:138] copyHostCerts
	I0717 20:48:08.220736 1020628 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem, removing ...
	I0717 20:48:08.220746 1020628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem
	I0717 20:48:08.220822 1020628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem (1675 bytes)
	I0717 20:48:08.221107 1020628 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem, removing ...
	I0717 20:48:08.221117 1020628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem
	I0717 20:48:08.221153 1020628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem (1078 bytes)
	I0717 20:48:08.221210 1020628 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem, removing ...
	I0717 20:48:08.221215 1020628 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem
	I0717 20:48:08.221241 1020628 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem (1123 bytes)
	I0717 20:48:08.221284 1020628 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-854415 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-854415]
	I0717 20:48:08.719034 1020628 provision.go:172] copyRemoteCerts
	I0717 20:48:08.719151 1020628 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:48:08.719218 1020628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854415
	I0717 20:48:08.738348 1020628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33890 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/kubernetes-upgrade-854415/id_rsa Username:docker}
	I0717 20:48:08.844713 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 20:48:08.877712 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 20:48:08.910035 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 20:48:08.942329 1020628 provision.go:86] duration metric: configureAuth took 751.645629ms
	I0717 20:48:08.942358 1020628 ubuntu.go:193] setting minikube options for container-runtime
	I0717 20:48:08.942593 1020628 config.go:182] Loaded profile config "kubernetes-upgrade-854415": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:48:08.942609 1020628 machine.go:91] provisioned docker machine in 4.122616404s
	I0717 20:48:08.942617 1020628 start.go:300] post-start starting for "kubernetes-upgrade-854415" (driver="docker")
	I0717 20:48:08.942650 1020628 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:48:08.942726 1020628 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:48:08.942789 1020628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854415
	I0717 20:48:08.966367 1020628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33890 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/kubernetes-upgrade-854415/id_rsa Username:docker}
	I0717 20:48:09.077068 1020628 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:48:09.081940 1020628 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 20:48:09.081988 1020628 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 20:48:09.082000 1020628 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 20:48:09.082012 1020628 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 20:48:09.082023 1020628 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/addons for local assets ...
	I0717 20:48:09.082092 1020628 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/files for local assets ...
	I0717 20:48:09.082177 1020628 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem -> 9039972.pem in /etc/ssl/certs
	I0717 20:48:09.082300 1020628 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:48:09.095252 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem --> /etc/ssl/certs/9039972.pem (1708 bytes)
	I0717 20:48:09.129651 1020628 start.go:303] post-start completed in 186.995684ms
	I0717 20:48:09.129799 1020628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:48:09.129879 1020628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854415
	I0717 20:48:09.154346 1020628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33890 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/kubernetes-upgrade-854415/id_rsa Username:docker}
	I0717 20:48:07.025932 1019794 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 20:48:07.026089 1019794 start.go:159] libmachine.API.Create for "missing-upgrade-200025" (driver="docker")
	I0717 20:48:07.026116 1019794 client.go:168] LocalClient.Create starting
	I0717 20:48:07.026205 1019794 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem
	I0717 20:48:07.026245 1019794 main.go:141] libmachine: Decoding PEM data...
	I0717 20:48:07.026266 1019794 main.go:141] libmachine: Parsing certificate...
	I0717 20:48:07.026327 1019794 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem
	I0717 20:48:07.026350 1019794 main.go:141] libmachine: Decoding PEM data...
	I0717 20:48:07.026364 1019794 main.go:141] libmachine: Parsing certificate...
	I0717 20:48:07.026625 1019794 cli_runner.go:164] Run: docker network inspect missing-upgrade-200025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 20:48:07.044017 1019794 cli_runner.go:211] docker network inspect missing-upgrade-200025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 20:48:07.044099 1019794 network_create.go:281] running [docker network inspect missing-upgrade-200025] to gather additional debugging logs...
	I0717 20:48:07.044122 1019794 cli_runner.go:164] Run: docker network inspect missing-upgrade-200025
	W0717 20:48:07.061316 1019794 cli_runner.go:211] docker network inspect missing-upgrade-200025 returned with exit code 1
	I0717 20:48:07.061351 1019794 network_create.go:284] error running [docker network inspect missing-upgrade-200025]: docker network inspect missing-upgrade-200025: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-200025 not found
	I0717 20:48:07.061364 1019794 network_create.go:286] output of [docker network inspect missing-upgrade-200025]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-200025 not found
	
	** /stderr **
	I0717 20:48:07.061431 1019794 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:48:07.079367 1019794 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0b860b1c7272 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c8:5f:d0:1e} reservation:<nil>}
	I0717 20:48:07.079737 1019794 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-943d483836f4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:b0:a7:a9:78} reservation:<nil>}
	I0717 20:48:07.080100 1019794 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8cd76a96833e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:11:cc:bf:5d} reservation:<nil>}
	I0717 20:48:07.080533 1019794 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000faf180}
	I0717 20:48:07.080558 1019794 network_create.go:123] attempt to create docker network missing-upgrade-200025 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0717 20:48:07.080619 1019794 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-200025 missing-upgrade-200025
	I0717 20:48:07.154974 1019794 network_create.go:107] docker network missing-upgrade-200025 192.168.76.0/24 created
	I0717 20:48:07.155006 1019794 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-200025" container
	I0717 20:48:07.155146 1019794 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 20:48:07.172781 1019794 cli_runner.go:164] Run: docker volume create missing-upgrade-200025 --label name.minikube.sigs.k8s.io=missing-upgrade-200025 --label created_by.minikube.sigs.k8s.io=true
	I0717 20:48:07.191816 1019794 oci.go:103] Successfully created a docker volume missing-upgrade-200025
	I0717 20:48:07.191914 1019794 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-200025-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-200025 --entrypoint /usr/bin/test -v missing-upgrade-200025:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0717 20:48:07.694716 1019794 oci.go:107] Successfully prepared a docker volume missing-upgrade-200025
	I0717 20:48:07.694757 1019794 preload.go:132] Checking if preload exists for k8s version v1.21.2 and runtime containerd
	I0717 20:48:07.694778 1019794 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 20:48:07.694868 1019794 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-200025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 20:48:09.252746 1020628 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 20:48:09.260006 1020628 fix.go:56] fixHost completed within 4.853920056s
	I0717 20:48:09.260036 1020628 start.go:83] releasing machines lock for "kubernetes-upgrade-854415", held for 4.853975499s
	I0717 20:48:09.260109 1020628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-854415
	I0717 20:48:09.282719 1020628 ssh_runner.go:195] Run: cat /version.json
	I0717 20:48:09.282785 1020628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854415
	I0717 20:48:09.283014 1020628 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:48:09.283079 1020628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-854415
	I0717 20:48:09.320236 1020628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33890 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/kubernetes-upgrade-854415/id_rsa Username:docker}
	I0717 20:48:09.330905 1020628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33890 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/kubernetes-upgrade-854415/id_rsa Username:docker}
	W0717 20:48:09.588939 1020628 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 20:48:09.589105 1020628 ssh_runner.go:195] Run: systemctl --version
	I0717 20:48:09.596310 1020628 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 20:48:09.603699 1020628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 20:48:09.627561 1020628 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 20:48:09.627685 1020628 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:48:09.640075 1020628 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 20:48:09.640135 1020628 start.go:469] detecting cgroup driver to use...
	I0717 20:48:09.640179 1020628 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 20:48:09.640243 1020628 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 20:48:09.658495 1020628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 20:48:09.674932 1020628 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:48:09.675034 1020628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:48:09.693395 1020628 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:48:09.709901 1020628 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 20:48:09.837591 1020628 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:48:09.970029 1020628 docker.go:212] disabling docker service ...
	I0717 20:48:09.970139 1020628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:48:09.986868 1020628 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:48:10.003115 1020628 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:48:10.148109 1020628 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:48:10.288177 1020628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:48:10.304213 1020628 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:48:10.326063 1020628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 20:48:10.338535 1020628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 20:48:10.351000 1020628 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 20:48:10.351110 1020628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 20:48:10.363581 1020628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:48:10.376129 1020628 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 20:48:10.391480 1020628 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:48:10.420941 1020628 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:48:10.442548 1020628 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 20:48:10.462251 1020628 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:48:10.478539 1020628 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:48:10.492129 1020628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:48:10.620229 1020628 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 20:48:10.766542 1020628 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 20:48:10.766671 1020628 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 20:48:10.777551 1020628 start.go:537] Will wait 60s for crictl version
	I0717 20:48:10.777664 1020628 ssh_runner.go:195] Run: which crictl
	I0717 20:48:10.797424 1020628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:48:10.895557 1020628 retry.go:31] will retry after 6.180720372s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:48:10Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unknown desc = server is not initialized yet"
	I0717 20:48:13.286911 1019794 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.21.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-200025:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (5.591986982s)
	I0717 20:48:13.286945 1019794 kic.go:199] duration metric: took 5.592162 seconds to extract preloaded images to volume
	W0717 20:48:13.287090 1019794 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 20:48:13.287204 1019794 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 20:48:13.372161 1019794 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-200025 --name missing-upgrade-200025 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-200025 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-200025 --network missing-upgrade-200025 --ip 192.168.76.2 --volume missing-upgrade-200025:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0717 20:48:13.736345 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Running}}
	I0717 20:48:13.755931 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	I0717 20:48:13.778092 1019794 cli_runner.go:164] Run: docker exec missing-upgrade-200025 stat /var/lib/dpkg/alternatives/iptables
	I0717 20:48:13.875609 1019794 oci.go:144] the created container "missing-upgrade-200025" has a running status.
	I0717 20:48:13.875635 1019794 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa...
	I0717 20:48:14.749263 1019794 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 20:48:14.784944 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	I0717 20:48:14.814263 1019794 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 20:48:14.814287 1019794 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-200025 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 20:48:14.894327 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	I0717 20:48:17.077154 1020628 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:48:17.140540 1020628 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.21
	RuntimeApiVersion:  v1
	I0717 20:48:17.140607 1020628 ssh_runner.go:195] Run: containerd --version
	I0717 20:48:17.181200 1020628 ssh_runner.go:195] Run: containerd --version
	I0717 20:48:17.233162 1020628 out.go:177] * Preparing Kubernetes v1.27.3 on containerd 1.6.21 ...
	I0717 20:48:17.235157 1020628 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-854415 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 20:48:17.264662 1020628 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0717 20:48:17.272139 1020628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:48:17.293018 1020628 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:48:17.293104 1020628 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:48:17.351762 1020628 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 20:48:17.351840 1020628 ssh_runner.go:195] Run: which lz4
	I0717 20:48:17.357502 1020628 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 20:48:17.363017 1020628 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 20:48:17.363053 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (366036177 bytes)
	I0717 20:48:14.920501 1019794 machine.go:88] provisioning docker machine ...
	I0717 20:48:14.922319 1019794 ubuntu.go:169] provisioning hostname "missing-upgrade-200025"
	I0717 20:48:14.922388 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:14.948251 1019794 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:14.948728 1019794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33895 <nil> <nil>}
	I0717 20:48:14.948743 1019794 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-200025 && echo "missing-upgrade-200025" | sudo tee /etc/hostname
	I0717 20:48:15.137799 1019794 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-200025
	
	I0717 20:48:15.137943 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:15.178177 1019794 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:15.178652 1019794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33895 <nil> <nil>}
	I0717 20:48:15.178679 1019794 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-200025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-200025/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-200025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:48:15.318341 1019794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:48:15.318370 1019794 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-898608/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-898608/.minikube}
	I0717 20:48:15.318405 1019794 ubuntu.go:177] setting up certificates
	I0717 20:48:15.318414 1019794 provision.go:83] configureAuth start
	I0717 20:48:15.318480 1019794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-200025
	I0717 20:48:15.343398 1019794 provision.go:138] copyHostCerts
	I0717 20:48:15.343469 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem, removing ...
	I0717 20:48:15.343481 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem
	I0717 20:48:15.343558 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem (1078 bytes)
	I0717 20:48:15.343657 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem, removing ...
	I0717 20:48:15.343668 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem
	I0717 20:48:15.343699 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem (1123 bytes)
	I0717 20:48:15.343761 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem, removing ...
	I0717 20:48:15.343768 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem
	I0717 20:48:15.343794 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem (1675 bytes)
	I0717 20:48:15.343846 1019794 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-200025 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-200025]
	I0717 20:48:15.733322 1019794 provision.go:172] copyRemoteCerts
	I0717 20:48:15.733419 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:48:15.733465 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:15.752261 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:15.842483 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 20:48:15.867699 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 20:48:15.891567 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 20:48:15.915006 1019794 provision.go:86] duration metric: configureAuth took 596.576101ms
	I0717 20:48:15.915030 1019794 ubuntu.go:193] setting minikube options for container-runtime
	I0717 20:48:15.915222 1019794 config.go:182] Loaded profile config "missing-upgrade-200025": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 20:48:15.915229 1019794 machine.go:91] provisioned docker machine in 992.931862ms
	I0717 20:48:15.915242 1019794 client.go:171] LocalClient.Create took 8.889120841s
	I0717 20:48:15.915256 1019794 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-200025" took 8.889167569s
	I0717 20:48:15.915264 1019794 start.go:300] post-start starting for "missing-upgrade-200025" (driver="docker")
	I0717 20:48:15.915272 1019794 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:48:15.915320 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:48:15.915358 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:15.933380 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:16.023055 1019794 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:48:16.027204 1019794 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 20:48:16.027246 1019794 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 20:48:16.027260 1019794 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 20:48:16.027268 1019794 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0717 20:48:16.027278 1019794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/addons for local assets ...
	I0717 20:48:16.027346 1019794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/files for local assets ...
	I0717 20:48:16.027437 1019794 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem -> 9039972.pem in /etc/ssl/certs
	I0717 20:48:16.027557 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:48:16.037225 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem --> /etc/ssl/certs/9039972.pem (1708 bytes)
	I0717 20:48:16.061420 1019794 start.go:303] post-start completed in 146.142001ms
	I0717 20:48:16.061878 1019794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-200025
	I0717 20:48:16.084145 1019794 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/missing-upgrade-200025/config.json ...
	I0717 20:48:16.084448 1019794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:48:16.084501 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:16.107523 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:16.195454 1019794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 20:48:16.201210 1019794 start.go:128] duration metric: createHost completed in 9.179075584s
	I0717 20:48:16.201308 1019794 cli_runner.go:164] Run: docker container inspect missing-upgrade-200025 --format={{.State.Status}}
	W0717 20:48:16.219541 1019794 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 20:48:16.219572 1019794 machine.go:88] provisioning docker machine ...
	I0717 20:48:16.219590 1019794 ubuntu.go:169] provisioning hostname "missing-upgrade-200025"
	I0717 20:48:16.219659 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:16.238055 1019794 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:16.238495 1019794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33895 <nil> <nil>}
	I0717 20:48:16.238515 1019794 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-200025 && echo "missing-upgrade-200025" | sudo tee /etc/hostname
	I0717 20:48:16.377149 1019794 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-200025
	
	I0717 20:48:16.377301 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:16.397495 1019794 main.go:141] libmachine: Using SSH client type: native
	I0717 20:48:16.398095 1019794 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x39f610] 0x3a1fa0 <nil>  [] 0s} 127.0.0.1 33895 <nil> <nil>}
	I0717 20:48:16.398127 1019794 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-200025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-200025/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-200025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 20:48:16.525936 1019794 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 20:48:16.525960 1019794 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-898608/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-898608/.minikube}
	I0717 20:48:16.525985 1019794 ubuntu.go:177] setting up certificates
	I0717 20:48:16.525994 1019794 provision.go:83] configureAuth start
	I0717 20:48:16.526056 1019794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-200025
	I0717 20:48:16.545124 1019794 provision.go:138] copyHostCerts
	I0717 20:48:16.545199 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem, removing ...
	I0717 20:48:16.545212 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem
	I0717 20:48:16.545287 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/ca.pem (1078 bytes)
	I0717 20:48:16.545382 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem, removing ...
	I0717 20:48:16.545391 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem
	I0717 20:48:16.545418 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/cert.pem (1123 bytes)
	I0717 20:48:16.545479 1019794 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem, removing ...
	I0717 20:48:16.545489 1019794 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem
	I0717 20:48:16.545517 1019794 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-898608/.minikube/key.pem (1675 bytes)
	I0717 20:48:16.545568 1019794 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-200025 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-200025]
	I0717 20:48:17.140969 1019794 provision.go:172] copyRemoteCerts
	I0717 20:48:17.141023 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 20:48:17.141070 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.165895 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:17.259396 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 20:48:17.295628 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 20:48:17.327805 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 20:48:17.364407 1019794 provision.go:86] duration metric: configureAuth took 838.401593ms
	I0717 20:48:17.364439 1019794 ubuntu.go:193] setting minikube options for container-runtime
	I0717 20:48:17.364687 1019794 config.go:182] Loaded profile config "missing-upgrade-200025": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0717 20:48:17.364699 1019794 machine.go:91] provisioned docker machine in 1.145121405s
	I0717 20:48:17.364713 1019794 start.go:300] post-start starting for "missing-upgrade-200025" (driver="docker")
	I0717 20:48:17.364730 1019794 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 20:48:17.364787 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 20:48:17.364826 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.398242 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:17.506108 1019794 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 20:48:17.511482 1019794 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 20:48:17.511526 1019794 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 20:48:17.511538 1019794 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 20:48:17.511545 1019794 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0717 20:48:17.511561 1019794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/addons for local assets ...
	I0717 20:48:17.511650 1019794 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-898608/.minikube/files for local assets ...
	I0717 20:48:17.511744 1019794 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem -> 9039972.pem in /etc/ssl/certs
	I0717 20:48:17.511887 1019794 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 20:48:17.529904 1019794 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem --> /etc/ssl/certs/9039972.pem (1708 bytes)
	I0717 20:48:17.563399 1019794 start.go:303] post-start completed in 198.658807ms
	I0717 20:48:17.563491 1019794 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:48:17.563562 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.595269 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:17.701473 1019794 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 20:48:17.713551 1019794 fix.go:56] fixHost completed within 30.556144488s
	I0717 20:48:17.713578 1019794 start.go:83] releasing machines lock for "missing-upgrade-200025", held for 30.556191298s
	I0717 20:48:17.713658 1019794 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-200025
	I0717 20:48:17.751694 1019794 ssh_runner.go:195] Run: cat /version.json
	I0717 20:48:17.751760 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.751699 1019794 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 20:48:17.751922 1019794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-200025
	I0717 20:48:17.803472 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	I0717 20:48:17.814137 1019794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33895 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/missing-upgrade-200025/id_rsa Username:docker}
	W0717 20:48:17.916095 1019794 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 20:48:17.916258 1019794 ssh_runner.go:195] Run: systemctl --version
	I0717 20:48:18.127059 1019794 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 20:48:18.134550 1019794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 20:48:18.215450 1019794 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 20:48:18.215551 1019794 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 20:48:18.274735 1019794 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 20:48:18.274761 1019794 start.go:469] detecting cgroup driver to use...
	I0717 20:48:18.274815 1019794 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 20:48:18.274886 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0717 20:48:18.305757 1019794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 20:48:18.328351 1019794 docker.go:196] disabling cri-docker service (if available) ...
	I0717 20:48:18.328409 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 20:48:18.354706 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 20:48:18.376193 1019794 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 20:48:18.396999 1019794 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 20:48:18.397071 1019794 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 20:48:18.655716 1019794 docker.go:212] disabling docker service ...
	I0717 20:48:18.655871 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 20:48:18.703628 1019794 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 20:48:18.719804 1019794 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 20:48:18.867688 1019794 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 20:48:19.186409 1019794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 20:48:19.205539 1019794 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 20:48:19.243103 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.4.1"|' /etc/containerd/config.toml"
	I0717 20:48:19.285211 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 20:48:19.335592 1019794 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 20:48:19.335692 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 20:48:19.385837 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:48:19.427462 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 20:48:19.447379 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 20:48:19.462194 1019794 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 20:48:19.476463 1019794 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 20:48:19.491579 1019794 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 20:48:19.505697 1019794 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 20:48:19.515712 1019794 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:48:19.687280 1019794 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 20:48:19.883714 1019794 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0717 20:48:19.883781 1019794 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0717 20:48:19.900093 1019794 start.go:537] Will wait 60s for crictl version
	I0717 20:48:19.900219 1019794 ssh_runner.go:195] Run: which crictl
	I0717 20:48:19.915529 1019794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:48:20.129988 1020628 containerd.go:547] Took 2.772535 seconds to copy over tarball
	I0717 20:48:20.130081 1020628 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 20:48:22.422663 1020628 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.292552457s)
	I0717 20:48:22.422690 1020628 containerd.go:554] Took 2.292676 seconds to extract the tarball
	I0717 20:48:22.422700 1020628 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 20:48:22.480900 1020628 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 20:48:22.585988 1020628 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 20:48:22.682943 1020628 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 20:48:22.750308 1020628 containerd.go:604] all images are preloaded for containerd runtime.
	I0717 20:48:22.750330 1020628 cache_images.go:84] Images are preloaded, skipping loading
	I0717 20:48:22.750394 1020628 ssh_runner.go:195] Run: sudo crictl info
	I0717 20:48:22.804018 1020628 cni.go:84] Creating CNI manager for ""
	I0717 20:48:22.804043 1020628 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:48:22.804054 1020628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 20:48:22.804071 1020628 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-854415 NodeName:kubernetes-upgrade-854415 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 20:48:22.804211 1020628 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-854415"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 20:48:22.804284 1020628 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-854415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-854415 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 20:48:22.804352 1020628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 20:48:22.815763 1020628 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 20:48:22.815839 1020628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 20:48:22.827611 1020628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (397 bytes)
	I0717 20:48:22.850372 1020628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 20:48:22.871872 1020628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2114 bytes)
	I0717 20:48:22.893958 1020628 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 20:48:22.899394 1020628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 20:48:22.914240 1020628 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415 for IP: 192.168.67.2
	I0717 20:48:22.914271 1020628 certs.go:190] acquiring lock for shared ca certs: {Name:mk081da4b0c80820af8357079096999320bef2ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:48:22.914431 1020628 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key
	I0717 20:48:22.914490 1020628 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key
	I0717 20:48:22.914579 1020628 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/client.key
	I0717 20:48:22.914650 1020628 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/apiserver.key.c7fa3a9e
	I0717 20:48:22.914700 1020628 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/proxy-client.key
	I0717 20:48:22.914814 1020628 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/903997.pem (1338 bytes)
	W0717 20:48:22.914846 1020628 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/903997_empty.pem, impossibly tiny 0 bytes
	I0717 20:48:22.914861 1020628 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 20:48:22.914890 1020628 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/ca.pem (1078 bytes)
	I0717 20:48:22.914923 1020628 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/cert.pem (1123 bytes)
	I0717 20:48:22.914949 1020628 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/certs/home/jenkins/minikube-integration/16890-898608/.minikube/certs/key.pem (1675 bytes)
	I0717 20:48:22.915001 1020628 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem (1708 bytes)
	I0717 20:48:22.915640 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 20:48:22.944299 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 20:48:22.974055 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 20:48:23.003390 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 20:48:23.033665 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 20:48:23.062754 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 20:48:23.091564 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 20:48:23.119771 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 20:48:23.148359 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 20:48:23.177128 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/certs/903997.pem --> /usr/share/ca-certificates/903997.pem (1338 bytes)
	I0717 20:48:23.205682 1020628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/ssl/certs/9039972.pem --> /usr/share/ca-certificates/9039972.pem (1708 bytes)
	I0717 20:48:23.241651 1020628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 20:48:23.262965 1020628 ssh_runner.go:195] Run: openssl version
	I0717 20:48:23.270428 1020628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/903997.pem && ln -fs /usr/share/ca-certificates/903997.pem /etc/ssl/certs/903997.pem"
	I0717 20:48:23.282465 1020628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/903997.pem
	I0717 20:48:23.287259 1020628 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 20:20 /usr/share/ca-certificates/903997.pem
	I0717 20:48:23.287329 1020628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/903997.pem
	I0717 20:48:23.296127 1020628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/903997.pem /etc/ssl/certs/51391683.0"
	I0717 20:48:23.307375 1020628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9039972.pem && ln -fs /usr/share/ca-certificates/9039972.pem /etc/ssl/certs/9039972.pem"
	I0717 20:48:23.319739 1020628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9039972.pem
	I0717 20:48:23.324611 1020628 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 20:20 /usr/share/ca-certificates/9039972.pem
	I0717 20:48:23.324724 1020628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9039972.pem
	I0717 20:48:23.333769 1020628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9039972.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 20:48:23.344966 1020628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 20:48:23.356769 1020628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:48:23.361615 1020628 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 20:15 /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:48:23.361695 1020628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 20:48:23.370328 1020628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 20:48:23.381087 1020628 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 20:48:23.385666 1020628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 20:48:23.394557 1020628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 20:48:23.403440 1020628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 20:48:23.412168 1020628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 20:48:23.420733 1020628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 20:48:23.429543 1020628 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 20:48:23.438606 1020628 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-854415 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-854415 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:48:23.438724 1020628 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0717 20:48:23.438792 1020628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 20:48:23.485429 1020628 cri.go:89] found id: ""
	I0717 20:48:23.485502 1020628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 20:48:23.496708 1020628 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 20:48:23.497346 1020628 kubeadm.go:636] restartCluster start
	I0717 20:48:23.497426 1020628 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 20:48:23.508779 1020628 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 20:48:23.509469 1020628 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-854415" does not appear in /home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:48:23.509715 1020628 kubeconfig.go:146] "kubernetes-upgrade-854415" context is missing from /home/jenkins/minikube-integration/16890-898608/kubeconfig - will repair!
	I0717 20:48:23.510245 1020628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/kubeconfig: {Name:mk933d9b210c77bbf248211a6ac799f4302f2fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:48:23.511312 1020628 kapi.go:59] client config for kubernetes-upgrade-854415: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/profiles/kubernetes-upgrade-854415/client.key", CAFile:"/home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x13e6910), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 20:48:23.513488 1020628 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 20:48:23.525464 1020628 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-07-17 20:47:05.829136314 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-07-17 20:48:22.885457338 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /run/containerd/containerd.sock
	+  criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-854415"
	   kubeletExtraArgs:
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-854415
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.27.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0717 20:48:23.525542 1020628 kubeadm.go:1128] stopping kube-system containers ...
	I0717 20:48:23.525561 1020628 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0717 20:48:23.525627 1020628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 20:48:23.568890 1020628 cri.go:89] found id: ""
	I0717 20:48:23.568969 1020628 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 20:48:23.584177 1020628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 20:48:23.595792 1020628 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Jul 17 20:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5739 Jul 17 20:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5819 Jul 17 20:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Jul 17 20:47 /etc/kubernetes/scheduler.conf
	
	I0717 20:48:23.595894 1020628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 20:48:23.607091 1020628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 20:48:23.618339 1020628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 20:48:23.629851 1020628 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 20:48:23.640978 1020628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 20:48:23.652073 1020628 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 20:48:23.652138 1020628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 20:48:23.716085 1020628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 20:48:20.034148 1019794 retry.go:31] will retry after 8.886676059s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:48:20Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 20:48:25.901800 1020628 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.185678265s)
	I0717 20:48:25.901880 1020628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 20:48:26.091747 1020628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 20:48:26.192221 1020628 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 20:48:26.279322 1020628 api_server.go:52] waiting for apiserver process to appear ...
	I0717 20:48:26.279393 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:26.794050 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:27.294045 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:27.793590 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:28.293542 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:28.793631 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:28.923834 1019794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:48:28.952820 1019794 retry.go:31] will retry after 12.265934816s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:48:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 20:48:29.294113 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:29.794486 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:30.293610 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:30.793609 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:31.293590 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:31.794219 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:32.294299 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:32.793584 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:33.293617 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:33.794139 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:34.294134 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:34.793635 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:35.293608 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:35.793852 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:36.293842 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:36.793971 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:37.294085 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:37.793618 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:38.293522 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:38.793607 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:39.294241 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:39.794313 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:40.293547 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:40.793604 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:41.294109 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:41.793509 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:42.294358 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:42.794208 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:43.294134 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:43.793642 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:41.218986 1019794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:48:41.250390 1019794 retry.go:31] will retry after 27.521724164s: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:48:41Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0717 20:48:44.294375 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:44.793585 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:45.294376 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:45.794418 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:46.293815 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:46.794434 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:47.294214 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:47.793577 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:48.293760 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:48.793598 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:49.293614 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:49.793917 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:50.294329 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:50.793687 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:51.293876 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:51.793631 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:52.294594 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:52.794168 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:53.293511 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:53.794462 1020628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:48:53.813403 1020628 api_server.go:72] duration metric: took 27.534081666s to wait for apiserver process to appear ...
	I0717 20:48:53.813432 1020628 api_server.go:88] waiting for apiserver healthz status ...
	I0717 20:48:53.813449 1020628 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 20:48:58.815680 1020628 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 20:48:59.316599 1020628 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 20:49:08.775473 1019794 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 20:49:08.810084 1019794 out.go:177] 
	W0717 20:49:08.813462 1019794 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to start container runtime: Temporary Error: sudo /usr/bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:08Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0717 20:49:08.813495 1019794 out.go:239] * 
	W0717 20:49:08.815867 1019794 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 20:49:08.818587 1019794 out.go:177] 
	I0717 20:49:04.317614 1020628 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 20:49:04.317652 1020628 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	
	* 
	* ==> container status <==
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2023-07-17 20:48:14 UTC, end at Mon 2023-07-17 20:49:10 UTC. --
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.871528357Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.871670470Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.871766856Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.872079055Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.872438564Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.873530613Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.875715829Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.875891739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.876189308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.876334490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.876694385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.877174797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.877570745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.877699443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.878027297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.878638220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.879054755Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.879344693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.879616883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.879720924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.879923059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.880708834Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.881136996Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Jul 17 20:48:19 missing-upgrade-200025 systemd[1]: Started containerd container runtime.
	Jul 17 20:48:19 missing-upgrade-200025 containerd[631]: time="2023-07-17T20:48:19.883474442Z" level=info msg="containerd successfully booted in 0.111593s"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001106] FS-Cache: O-key=[8] '4c72ed0000000000'
	[  +0.000795] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000f945483a
	[  +0.001136] FS-Cache: N-key=[8] '4c72ed0000000000'
	[  +0.003226] FS-Cache: Duplicate cookie detected
	[  +0.000785] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001034] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000fde5d07c
	[  +0.001069] FS-Cache: O-key=[8] '4c72ed0000000000'
	[  +0.000738] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=0000000090c83777
	[  +0.001137] FS-Cache: N-key=[8] '4c72ed0000000000'
	[  +2.706152] FS-Cache: Duplicate cookie detected
	[  +0.000727] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000960] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=00000000c5602f0f
	[  +0.001080] FS-Cache: O-key=[8] '4b72ed0000000000'
	[  +0.000733] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000f945483a
	[  +0.001049] FS-Cache: N-key=[8] '4b72ed0000000000'
	[  +0.350862] FS-Cache: Duplicate cookie detected
	[  +0.000762] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001000] FS-Cache: O-cookie d=00000000b49df5bc{9p.inode} n=000000000634e0be
	[  +0.001088] FS-Cache: O-key=[8] '5172ed0000000000'
	[  +0.000720] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=00000000b49df5bc{9p.inode} n=00000000a176162d
	[  +0.001049] FS-Cache: N-key=[8] '5172ed0000000000'
	
	* 
	* ==> kernel <==
	*  20:49:10 up  4:31,  0 users,  load average: 1.32, 2.03, 2.03
	Linux missing-upgrade-200025 5.15.0-1039-aws #44~20.04.1-Ubuntu SMP Thu Jun 22 12:21:08 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2023-07-17 20:48:14 UTC, end at Mon 2023-07-17 20:49:10 UTC. --
	-- No entries --
	
	

-- /stdout --
** stderr ** 
	E0717 20:49:09.563691 1023613 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 20:49:09.591920 1023613 logs.go:281] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 20:49:09.622969 1023613 logs.go:281] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 20:49:09.651648 1023613 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 20:49:09.681098 1023613 logs.go:281] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 20:49:09.710130 1023613 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 20:49:09.741438 1023613 logs.go:281] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 20:49:09.772691 1023613 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	E0717 20:49:10.127724 1023613 logs.go:195] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-07-17T20:49:09Z" level=fatal msg="listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2023-07-17T20:49:09Z\" level=fatal msg=\"listing containers: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E0717 20:49:10.522006 1023613 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p missing-upgrade-200025 -n missing-upgrade-200025
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p missing-upgrade-200025 -n missing-upgrade-200025: exit status 2 (321.801222ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "missing-upgrade-200025" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "missing-upgrade-200025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-200025
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-200025: (2.00974655s)
--- FAIL: TestMissingContainerUpgrade (244.83s)


Test pass (266/304)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.64
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.27.3/json-events 9.86
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.63
22 TestAddons/Setup 133.32
24 TestAddons/parallel/Registry 15.08
26 TestAddons/parallel/InspektorGadget 10.95
27 TestAddons/parallel/MetricsServer 5.95
30 TestAddons/parallel/CSI 52.12
31 TestAddons/parallel/Headlamp 11.9
32 TestAddons/parallel/CloudSpanner 5.78
35 TestAddons/serial/GCPAuth/Namespaces 0.17
36 TestAddons/StoppedEnableDisable 12.35
37 TestCertOptions 39.92
38 TestCertExpiration 246.67
40 TestForceSystemdFlag 43.19
41 TestForceSystemdEnv 39.92
48 TestErrorSpam/start 0.89
49 TestErrorSpam/status 1.15
50 TestErrorSpam/pause 1.84
51 TestErrorSpam/unpause 2.17
52 TestErrorSpam/stop 1.49
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 57.65
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 17.85
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.1
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.39
64 TestFunctional/serial/CacheCmd/cache/add_local 1.38
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.48
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.44
69 TestFunctional/serial/CacheCmd/cache/delete 0.11
70 TestFunctional/serial/MinikubeKubectlCmd 0.15
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
72 TestFunctional/serial/ExtraConfig 54.02
73 TestFunctional/serial/ComponentHealth 0.1
74 TestFunctional/serial/LogsCmd 1.88
75 TestFunctional/serial/LogsFileCmd 2.11
76 TestFunctional/serial/InvalidService 4.43
78 TestFunctional/parallel/ConfigCmd 0.51
79 TestFunctional/parallel/DashboardCmd 11.08
80 TestFunctional/parallel/DryRun 0.72
81 TestFunctional/parallel/InternationalLanguage 0.22
82 TestFunctional/parallel/StatusCmd 1.32
86 TestFunctional/parallel/ServiceCmdConnect 7.71
87 TestFunctional/parallel/AddonsCmd 0.21
88 TestFunctional/parallel/PersistentVolumeClaim 26.45
90 TestFunctional/parallel/SSHCmd 0.81
91 TestFunctional/parallel/CpCmd 1.56
93 TestFunctional/parallel/FileSync 0.41
94 TestFunctional/parallel/CertSync 2.15
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.8
102 TestFunctional/parallel/License 0.39
103 TestFunctional/parallel/Version/short 0.08
104 TestFunctional/parallel/Version/components 1.05
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
109 TestFunctional/parallel/ImageCommands/ImageBuild 3.68
110 TestFunctional/parallel/ImageCommands/Setup 2.64
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.37
118 TestFunctional/parallel/ServiceCmd/List 0.48
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
121 TestFunctional/parallel/ServiceCmd/Format 0.53
122 TestFunctional/parallel/ServiceCmd/URL 0.53
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.7
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.73
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
139 TestFunctional/parallel/ProfileCmd/profile_list 0.43
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
141 TestFunctional/parallel/MountCmd/any-port 7.51
142 TestFunctional/parallel/MountCmd/specific-port 2.63
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.13
144 TestFunctional/delete_addon-resizer_images 0.1
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 91.55
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.6
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.87
157 TestJSONOutput/start/Command 53.42
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.8
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.8
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.87
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.24
182 TestKicCustomNetwork/create_custom_network 46.56
183 TestKicCustomNetwork/use_default_bridge_network 33.89
184 TestKicExistingNetwork 36.95
185 TestKicCustomSubnet 35.79
186 TestKicStaticIP 36.74
187 TestMainNoArgs 0.05
188 TestMinikubeProfile 73.72
191 TestMountStart/serial/StartWithMountFirst 6.37
192 TestMountStart/serial/VerifyMountFirst 0.28
193 TestMountStart/serial/StartWithMountSecond 6.82
194 TestMountStart/serial/VerifyMountSecond 0.34
195 TestMountStart/serial/DeleteFirst 1.71
196 TestMountStart/serial/VerifyMountPostDelete 0.29
197 TestMountStart/serial/Stop 1.22
198 TestMountStart/serial/RestartStopped 7.74
199 TestMountStart/serial/VerifyMountPostStop 0.29
202 TestMultiNode/serial/FreshStart2Nodes 77.16
203 TestMultiNode/serial/DeployApp2Nodes 5.02
204 TestMultiNode/serial/PingHostFrom2Pods 1.16
205 TestMultiNode/serial/AddNode 18.91
206 TestMultiNode/serial/ProfileList 0.34
207 TestMultiNode/serial/CopyFile 10.93
208 TestMultiNode/serial/StopNode 2.39
209 TestMultiNode/serial/StartAfterStop 23.85
210 TestMultiNode/serial/RestartKeepsNodes 140.35
211 TestMultiNode/serial/DeleteNode 5.13
212 TestMultiNode/serial/StopMultiNode 24.41
213 TestMultiNode/serial/RestartMultiNode 106.36
214 TestMultiNode/serial/ValidateNameConflict 45.55
219 TestPreload 149.22
221 TestScheduledStopUnix 119.28
224 TestInsufficientStorage 10.86
225 TestRunningBinaryUpgrade 136.84
227 TestKubernetesUpgrade 440.74
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
231 TestNoKubernetes/serial/StartWithK8s 37.45
232 TestNoKubernetes/serial/StartWithStopK8s 30.2
233 TestNoKubernetes/serial/Start 9.06
234 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
235 TestNoKubernetes/serial/ProfileList 1.1
236 TestNoKubernetes/serial/Stop 1.24
237 TestNoKubernetes/serial/StartNoArgs 7.33
238 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
239 TestStoppedBinaryUpgrade/Setup 1.14
240 TestStoppedBinaryUpgrade/Upgrade 153.09
241 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
250 TestPause/serial/Start 65.27
258 TestNetworkPlugins/group/false 3.79
262 TestPause/serial/SecondStartNoReconfiguration 15.59
263 TestPause/serial/Pause 1.03
264 TestPause/serial/VerifyStatus 0.52
265 TestPause/serial/Unpause 1.06
266 TestPause/serial/PauseAgain 1.26
267 TestPause/serial/DeletePaused 3.1
268 TestPause/serial/VerifyDeletedResources 0.39
270 TestStartStop/group/old-k8s-version/serial/FirstStart 138.62
271 TestStartStop/group/old-k8s-version/serial/DeployApp 8.6
272 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
273 TestStartStop/group/old-k8s-version/serial/Stop 12.15
274 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
275 TestStartStop/group/old-k8s-version/serial/SecondStart 676.91
277 TestStartStop/group/no-preload/serial/FirstStart 70.3
278 TestStartStop/group/no-preload/serial/DeployApp 8.52
279 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.26
280 TestStartStop/group/no-preload/serial/Stop 12.16
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
282 TestStartStop/group/no-preload/serial/SecondStart 354.95
283 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
284 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
285 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
286 TestStartStop/group/no-preload/serial/Pause 3.47
288 TestStartStop/group/embed-certs/serial/FirstStart 96.82
289 TestStartStop/group/embed-certs/serial/DeployApp 8.47
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.32
291 TestStartStop/group/embed-certs/serial/Stop 12.14
292 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
293 TestStartStop/group/embed-certs/serial/SecondStart 349.49
294 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
295 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
296 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
297 TestStartStop/group/old-k8s-version/serial/Pause 3.59
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.72
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.49
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.34
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.17
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 346.65
305 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.03
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
307 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
308 TestStartStop/group/embed-certs/serial/Pause 3.62
310 TestStartStop/group/newest-cni/serial/FirstStart 50.23
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.44
313 TestStartStop/group/newest-cni/serial/Stop 1.3
314 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
315 TestStartStop/group/newest-cni/serial/SecondStart 41.97
316 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
319 TestStartStop/group/newest-cni/serial/Pause 3.41
320 TestNetworkPlugins/group/auto/Start 92.83
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.04
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.21
323 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
324 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.49
325 TestNetworkPlugins/group/kindnet/Start 83.19
326 TestNetworkPlugins/group/auto/KubeletFlags 0.4
327 TestNetworkPlugins/group/auto/NetCatPod 11.51
328 TestNetworkPlugins/group/auto/DNS 0.26
329 TestNetworkPlugins/group/auto/Localhost 0.26
330 TestNetworkPlugins/group/auto/HairPin 0.28
331 TestNetworkPlugins/group/calico/Start 68.17
332 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
333 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
334 TestNetworkPlugins/group/kindnet/NetCatPod 9.5
335 TestNetworkPlugins/group/kindnet/DNS 0.31
336 TestNetworkPlugins/group/kindnet/Localhost 0.23
337 TestNetworkPlugins/group/kindnet/HairPin 0.29
338 TestNetworkPlugins/group/calico/ControllerPod 5.05
339 TestNetworkPlugins/group/custom-flannel/Start 68.61
340 TestNetworkPlugins/group/calico/KubeletFlags 0.47
341 TestNetworkPlugins/group/calico/NetCatPod 13.73
342 TestNetworkPlugins/group/calico/DNS 0.27
343 TestNetworkPlugins/group/calico/Localhost 0.23
344 TestNetworkPlugins/group/calico/HairPin 0.23
345 TestNetworkPlugins/group/enable-default-cni/Start 89.31
346 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
347 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.52
348 TestNetworkPlugins/group/custom-flannel/DNS 0.23
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
351 TestNetworkPlugins/group/flannel/Start 68.46
352 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.46
353 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.61
354 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
355 TestNetworkPlugins/group/enable-default-cni/Localhost 0.27
356 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
357 TestNetworkPlugins/group/bridge/Start 48.98
358 TestNetworkPlugins/group/flannel/ControllerPod 5.04
359 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
360 TestNetworkPlugins/group/flannel/NetCatPod 10.44
361 TestNetworkPlugins/group/flannel/DNS 0.26
362 TestNetworkPlugins/group/flannel/Localhost 0.18
363 TestNetworkPlugins/group/flannel/HairPin 0.21
364 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
365 TestNetworkPlugins/group/bridge/NetCatPod 9.35
366 TestNetworkPlugins/group/bridge/DNS 0.2
367 TestNetworkPlugins/group/bridge/Localhost 0.22
368 TestNetworkPlugins/group/bridge/HairPin 0.19
TestDownloadOnly/v1.16.0/json-events (17.64s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-062474 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-062474 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.638754783s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.64s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-062474
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-062474: exit status 85 (84.131662ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-062474 | jenkins | v1.30.1 | 17 Jul 23 20:14 UTC |          |
	|         | -p download-only-062474        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:14:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:14:34.402561  904002 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:14:34.402830  904002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:14:34.402857  904002 out.go:309] Setting ErrFile to fd 2...
	I0717 20:14:34.402878  904002 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:14:34.403188  904002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	W0717 20:14:34.403348  904002 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16890-898608/.minikube/config/config.json: open /home/jenkins/minikube-integration/16890-898608/.minikube/config/config.json: no such file or directory
	I0717 20:14:34.403804  904002 out.go:303] Setting JSON to true
	I0717 20:14:34.404872  904002 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14222,"bootTime":1689610653,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:14:34.404966  904002 start.go:138] virtualization:  
	I0717 20:14:34.407806  904002 out.go:97] [download-only-062474] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:14:34.409880  904002 out.go:169] MINIKUBE_LOCATION=16890
	W0717 20:14:34.408008  904002 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 20:14:34.408062  904002 notify.go:220] Checking for updates...
	I0717 20:14:34.412378  904002 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:14:34.414372  904002 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:14:34.416710  904002 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:14:34.418615  904002 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 20:14:34.423385  904002 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 20:14:34.424099  904002 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:14:34.448150  904002 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:14:34.448237  904002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:14:34.532166  904002 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-17 20:14:34.521982093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:14:34.532274  904002 docker.go:294] overlay module found
	I0717 20:14:34.534522  904002 out.go:97] Using the docker driver based on user configuration
	I0717 20:14:34.534552  904002 start.go:298] selected driver: docker
	I0717 20:14:34.534559  904002 start.go:880] validating driver "docker" against <nil>
	I0717 20:14:34.534657  904002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:14:34.599459  904002 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-07-17 20:14:34.590133348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:14:34.599629  904002 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 20:14:34.599900  904002 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0717 20:14:34.600059  904002 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 20:14:34.602241  904002 out.go:169] Using Docker driver with root privileges
	I0717 20:14:34.604084  904002 cni.go:84] Creating CNI manager for ""
	I0717 20:14:34.604104  904002 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:14:34.604130  904002 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 20:14:34.604148  904002 start_flags.go:319] config:
	{Name:download-only-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-062474 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:14:34.606445  904002 out.go:97] Starting control plane node download-only-062474 in cluster download-only-062474
	I0717 20:14:34.606494  904002 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 20:14:34.608495  904002 out.go:97] Pulling base image ...
	I0717 20:14:34.608520  904002 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0717 20:14:34.608656  904002 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 20:14:34.630013  904002 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 20:14:34.630665  904002 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 20:14:34.630775  904002 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 20:14:34.704775  904002 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0717 20:14:34.704810  904002 cache.go:57] Caching tarball of preloaded images
	I0717 20:14:34.704978  904002 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0717 20:14:34.707954  904002 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 20:14:34.707987  904002 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:14:34.830759  904002 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0717 20:14:40.473546  904002 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 20:14:43.040828  904002 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:14:43.040956  904002 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:14:44.088173  904002 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0717 20:14:44.088526  904002 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/download-only-062474/config.json ...
	I0717 20:14:44.088563  904002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/download-only-062474/config.json: {Name:mk9c469606d7e46841a0a9aef41604bff791d5fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 20:14:44.088738  904002 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0717 20:14:44.088951  904002 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/16890-898608/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-062474"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

TestDownloadOnly/v1.27.3/json-events (9.86s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-062474 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-062474 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.856541996s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (9.86s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-062474
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-062474: exit status 85 (90.755307ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-062474 | jenkins | v1.30.1 | 17 Jul 23 20:14 UTC |          |
	|         | -p download-only-062474        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-062474 | jenkins | v1.30.1 | 17 Jul 23 20:14 UTC |          |
	|         | -p download-only-062474        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 20:14:52
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.20.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 20:14:52.131001  904077 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:14:52.131236  904077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:14:52.131265  904077 out.go:309] Setting ErrFile to fd 2...
	I0717 20:14:52.131286  904077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:14:52.131567  904077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	W0717 20:14:52.131718  904077 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16890-898608/.minikube/config/config.json: open /home/jenkins/minikube-integration/16890-898608/.minikube/config/config.json: no such file or directory
	I0717 20:14:52.132016  904077 out.go:303] Setting JSON to true
	I0717 20:14:52.133035  904077 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14239,"bootTime":1689610653,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:14:52.133140  904077 start.go:138] virtualization:  
	I0717 20:14:52.135880  904077 out.go:97] [download-only-062474] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:14:52.137741  904077 out.go:169] MINIKUBE_LOCATION=16890
	I0717 20:14:52.136237  904077 notify.go:220] Checking for updates...
	I0717 20:14:52.139899  904077 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:14:52.141786  904077 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:14:52.143880  904077 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:14:52.145695  904077 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0717 20:14:52.149409  904077 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 20:14:52.149940  904077 config.go:182] Loaded profile config "download-only-062474": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0717 20:14:52.150016  904077 start.go:788] api.Load failed for download-only-062474: filestore "download-only-062474": Docker machine "download-only-062474" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 20:14:52.150154  904077 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 20:14:52.150182  904077 start.go:788] api.Load failed for download-only-062474: filestore "download-only-062474": Docker machine "download-only-062474" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 20:14:52.177819  904077 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:14:52.177915  904077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:14:52.259015  904077 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 20:14:52.248956181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:14:52.259121  904077 docker.go:294] overlay module found
	I0717 20:14:52.261265  904077 out.go:97] Using the docker driver based on existing profile
	I0717 20:14:52.261301  904077 start.go:298] selected driver: docker
	I0717 20:14:52.261309  904077 start.go:880] validating driver "docker" against &{Name:download-only-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-062474 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetP
ath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:14:52.261508  904077 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:14:52.328453  904077 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-07-17 20:14:52.318026053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:14:52.328937  904077 cni.go:84] Creating CNI manager for ""
	I0717 20:14:52.328954  904077 cni.go:149] "docker" driver + "containerd" runtime found, recommending kindnet
	I0717 20:14:52.328969  904077 start_flags.go:319] config:
	{Name:download-only-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-062474 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:14:52.331496  904077 out.go:97] Starting control plane node download-only-062474 in cluster download-only-062474
	I0717 20:14:52.331549  904077 cache.go:122] Beginning downloading kic base image for docker with containerd
	I0717 20:14:52.333752  904077 out.go:97] Pulling base image ...
	I0717 20:14:52.333785  904077 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:14:52.333847  904077 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 20:14:52.352075  904077 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 20:14:52.352208  904077 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 20:14:52.352229  904077 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 20:14:52.352234  904077 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 20:14:52.352242  904077 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 20:14:52.409405  904077 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4
	I0717 20:14:52.409431  904077 cache.go:57] Caching tarball of preloaded images
	I0717 20:14:52.410008  904077 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime containerd
	I0717 20:14:52.412461  904077 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 20:14:52.412489  904077 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4 ...
	I0717 20:14:52.524067  904077 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:14a60dcdae19ae70139b18fd027fe33b -> /home/jenkins/minikube-integration/16890-898608/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-containerd-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-062474"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-062474
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-408945 --alsologtostderr --binary-mirror http://127.0.0.1:34645 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-408945" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-408945
--- PASS: TestBinaryMirror (0.63s)

TestAddons/Setup (133.32s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-911602 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-911602 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m13.324109073s)
--- PASS: TestAddons/Setup (133.32s)

TestAddons/parallel/Registry (15.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 37.647287ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jpmhq" [474f72fc-16e6-4e88-a1ec-a561c39042c6] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.02024225s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-brg6s" [44b8f712-5ae3-4448-a5c6-6deffb38d32b] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013918884s
addons_test.go:316: (dbg) Run:  kubectl --context addons-911602 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-911602 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-911602 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.8886568s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 ip
2023/07/17 20:17:31 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.08s)

TestAddons/parallel/InspektorGadget (10.95s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nlhs5" [98809372-9316-46db-a2c8-992fe231c225] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011644078s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-911602
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-911602: (5.936089883s)
--- PASS: TestAddons/parallel/InspektorGadget (10.95s)

TestAddons/parallel/MetricsServer (5.95s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.019918ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-nmrz5" [65b74af8-a62f-4378-a2df-10dfff824c7a] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011627715s
addons_test.go:391: (dbg) Run:  kubectl --context addons-911602 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.95s)

TestAddons/parallel/CSI (52.12s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.892545ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-911602 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-911602 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [43a12bd6-4e09-4b29-a1ba-19d3419b54f6] Pending
helpers_test.go:344: "task-pv-pod" [43a12bd6-4e09-4b29-a1ba-19d3419b54f6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [43a12bd6-4e09-4b29-a1ba-19d3419b54f6] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.014386831s
addons_test.go:560: (dbg) Run:  kubectl --context addons-911602 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-911602 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-911602 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-911602 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-911602 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-911602 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-911602 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-911602 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c7904f8b-1989-4826-847a-259a66d22514] Pending
helpers_test.go:344: "task-pv-pod-restore" [c7904f8b-1989-4826-847a-259a66d22514] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.019132545s
addons_test.go:602: (dbg) Run:  kubectl --context addons-911602 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-911602 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-911602 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-911602 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.843242048s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-911602 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.12s)

TestAddons/parallel/Headlamp (11.9s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-911602 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-911602 --alsologtostderr -v=1: (1.876798449s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-lrwdw" [0e09845f-8a6b-4d6c-8c45-74347f1a49b9] Pending
helpers_test.go:344: "headlamp-66f6498c69-lrwdw" [0e09845f-8a6b-4d6c-8c45-74347f1a49b9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-lrwdw" [0e09845f-8a6b-4d6c-8c45-74347f1a49b9] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-lrwdw" [0e09845f-8a6b-4d6c-8c45-74347f1a49b9] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.019625963s
--- PASS: TestAddons/parallel/Headlamp (11.90s)

TestAddons/parallel/CloudSpanner (5.78s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-6g99r" [185fca7d-20f0-4a0c-a3f7-e7e59b3388a5] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011120648s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-911602
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-911602 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-911602 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.35s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-911602
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-911602: (12.064400796s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-911602
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-911602
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-911602
--- PASS: TestAddons/StoppedEnableDisable (12.35s)

TestCertOptions (39.92s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-270882 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0717 20:55:48.696229  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-270882 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.976660855s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-270882 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-270882 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-270882 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-270882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-270882
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-270882: (2.115764578s)
--- PASS: TestCertOptions (39.92s)

TestCertExpiration (246.67s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-034244 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-034244 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (44.70778526s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-034244 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-034244 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (19.635729315s)
helpers_test.go:175: Cleaning up "cert-expiration-034244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-034244
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-034244: (2.324130289s)
--- PASS: TestCertExpiration (246.67s)

TestForceSystemdFlag (43.19s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-358752 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-358752 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.288186711s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-358752 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-358752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-358752
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-358752: (2.351526893s)
--- PASS: TestForceSystemdFlag (43.19s)

TestForceSystemdEnv (39.92s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-938256 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-938256 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.231159319s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-938256 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-938256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-938256
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-938256: (2.251694035s)
--- PASS: TestForceSystemdEnv (39.92s)

TestErrorSpam/start (0.89s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 start --dry-run
--- PASS: TestErrorSpam/start (0.89s)

TestErrorSpam/status (1.15s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.84s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 pause
--- PASS: TestErrorSpam/pause (1.84s)

TestErrorSpam/unpause (2.17s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 unpause
--- PASS: TestErrorSpam/unpause (2.17s)

TestErrorSpam/stop (1.49s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 stop: (1.259901074s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-858640 --log_dir /tmp/nospam-858640 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16890-898608/.minikube/files/etc/test/nested/copy/903997/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (57.65s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-949323 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-949323 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (57.653443275s)
--- PASS: TestFunctional/serial/StartWithProxy (57.65s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (17.85s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-949323 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-949323 --alsologtostderr -v=8: (17.845626372s)
functional_test.go:659: soft start took 17.846119263s for "functional-949323" cluster.
--- PASS: TestFunctional/serial/SoftStart (17.85s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-949323 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 cache add registry.k8s.io/pause:3.1: (1.549564416s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 cache add registry.k8s.io/pause:3.3: (1.461910705s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 cache add registry.k8s.io/pause:latest: (1.381840421s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)

TestFunctional/serial/CacheCmd/cache/add_local (1.38s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-949323 /tmp/TestFunctionalserialCacheCmdcacheadd_local1064893915/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 cache add minikube-local-cache-test:functional-949323
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 cache delete minikube-local-cache-test:functional-949323
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-949323
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.48s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.48s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.44s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (337.097106ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 cache reload: (1.375031563s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.44s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 kubectl -- --context functional-949323 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-949323 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (54.02s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-949323 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 20:22:17.100429  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:17.106020  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:17.116377  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:17.136707  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:17.177040  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:17.257526  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:17.417951  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:17.738525  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:18.379430  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:19.659935  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:22.220462  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:27.341525  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:22:37.582663  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-949323 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.023202082s)
functional_test.go:757: restart took 54.023291469s for "functional-949323" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (54.02s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-949323 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.88s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 logs
E0717 20:22:58.063574  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 logs: (1.876077437s)
--- PASS: TestFunctional/serial/LogsCmd (1.88s)

TestFunctional/serial/LogsFileCmd (2.11s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 logs --file /tmp/TestFunctionalserialLogsFileCmd1966160969/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 logs --file /tmp/TestFunctionalserialLogsFileCmd1966160969/001/logs.txt: (2.109974069s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.11s)

TestFunctional/serial/InvalidService (4.43s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-949323 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-949323
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-949323: exit status 115 (467.456195ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30754 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-949323 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.43s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 config get cpus: exit status 14 (66.73672ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 config get cpus: exit status 14 (70.80133ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (11.08s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-949323 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-949323 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 934863: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.08s)

TestFunctional/parallel/DryRun (0.72s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-949323 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-949323 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (401.412129ms)

-- stdout --
	* [functional-949323] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 20:23:51.865553  934209 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:23:51.865759  934209 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:23:51.865764  934209 out.go:309] Setting ErrFile to fd 2...
	I0717 20:23:51.865769  934209 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:23:51.867016  934209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:23:51.867429  934209 out.go:303] Setting JSON to false
	I0717 20:23:51.871193  934209 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14779,"bootTime":1689610653,"procs":354,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:23:51.871278  934209 start.go:138] virtualization:  
	I0717 20:23:51.875342  934209 out.go:177] * [functional-949323] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:23:51.877469  934209 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:23:51.879217  934209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:23:51.877530  934209 notify.go:220] Checking for updates...
	I0717 20:23:51.883665  934209 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:23:51.886484  934209 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:23:51.888409  934209 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:23:51.890225  934209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:23:51.892610  934209 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:23:51.893181  934209 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:23:51.959673  934209 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:23:51.959821  934209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:23:52.104042  934209 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 20:23:52.092288783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:23:52.104155  934209 docker.go:294] overlay module found
	I0717 20:23:52.108477  934209 out.go:177] * Using the docker driver based on existing profile
	I0717 20:23:52.110850  934209 start.go:298] selected driver: docker
	I0717 20:23:52.110872  934209 start.go:880] validating driver "docker" against &{Name:functional-949323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-949323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:23:52.110988  934209 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:23:52.117648  934209 out.go:177] 
	W0717 20:23:52.120230  934209 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 20:23:52.122885  934209 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-949323 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.72s)

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-949323 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-949323 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (216.248131ms)

-- stdout --
	* [functional-949323] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 20:23:52.503081  934417 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:23:52.503311  934417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:23:52.503337  934417 out.go:309] Setting ErrFile to fd 2...
	I0717 20:23:52.503356  934417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:23:52.503767  934417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:23:52.504187  934417 out.go:303] Setting JSON to false
	I0717 20:23:52.505348  934417 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14780,"bootTime":1689610653,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:23:52.505441  934417 start.go:138] virtualization:  
	I0717 20:23:52.508191  934417 out.go:177] * [functional-949323] minikube v1.30.1 sur Ubuntu 20.04 (arm64)
	I0717 20:23:52.510467  934417 notify.go:220] Checking for updates...
	I0717 20:23:52.512931  934417 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:23:52.515157  934417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:23:52.516990  934417 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:23:52.519258  934417 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:23:52.521155  934417 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:23:52.523162  934417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:23:52.525420  934417 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:23:52.526125  934417 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:23:52.551702  934417 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:23:52.551798  934417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:23:52.646858  934417 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-07-17 20:23:52.635893188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:23:52.646963  934417 docker.go:294] overlay module found
	I0717 20:23:52.649541  934417 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 20:23:52.651655  934417 start.go:298] selected driver: docker
	I0717 20:23:52.651679  934417 start.go:880] validating driver "docker" against &{Name:functional-949323 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-949323 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 20:23:52.651826  934417 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:23:52.654573  934417 out.go:177] 
	W0717 20:23:52.656447  934417 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 20:23:52.658543  934417 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

TestFunctional/parallel/ServiceCmdConnect (7.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-949323 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-949323 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-xpl8q" [0e692402-8fd8-4f33-88fd-b328bc745788] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-xpl8q" [0e692402-8fd8-4f33-88fd-b328bc745788] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.008398777s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31089
functional_test.go:1674: http://192.168.49.2:31089: success! body:

Hostname: hello-node-connect-58d66798bb-xpl8q

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31089
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.71s)

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (26.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d1461322-6f9a-44b8-95ed-09448a5ba4ad] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011441028s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-949323 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-949323 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-949323 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-949323 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [252218a6-85ef-44db-90c3-707131ade1d8] Pending
helpers_test.go:344: "sp-pod" [252218a6-85ef-44db-90c3-707131ade1d8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [252218a6-85ef-44db-90c3-707131ade1d8] Running
E0717 20:23:39.024211  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.016688593s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-949323 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-949323 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-949323 delete -f testdata/storage-provisioner/pod.yaml: (1.571339029s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-949323 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [060d9ded-d280-4083-913a-637b827ed3d5] Pending
helpers_test.go:344: "sp-pod" [060d9ded-d280-4083-913a-637b827ed3d5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [060d9ded-d280-4083-913a-637b827ed3d5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.016645052s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-949323 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.45s)
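The PVC test above checks that data written from `sp-pod` survives the pod being deleted and re-applied against the same claim. A minimal local sketch of that persistence flow (assumption: a temp directory stands in for the volume bound to `pvc/myclaim`; the `touch`/`ls` pair mirrors the `kubectl exec` calls at functional_test_pvc_test.go:100 and :114, no cluster required):

```shell
# Sketch only: emulate the PVC persistence check without a cluster.
vol=$(mktemp -d)          # stands in for the provisioned volume bound to myclaim

touch "$vol/foo"          # mirrors: kubectl exec sp-pod -- touch /tmp/mount/foo
                          # (in the real test, sp-pod is deleted and re-applied here;
                          #  the volume outlives the pod)
contents=$(ls "$vol")     # mirrors: kubectl exec sp-pod -- ls /tmp/mount
echo "$contents"

rm -rf "$vol"
```

The check passes as long as `foo` is still listed after the pod (here, only the variable) is recreated over the same backing storage.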

                                                
                                    
TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (1.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh -n functional-949323 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 cp functional-949323:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd32475472/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh -n functional-949323 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/903997/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo cat /etc/test/nested/copy/903997/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/903997.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo cat /etc/ssl/certs/903997.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/903997.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo cat /usr/share/ca-certificates/903997.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/9039972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo cat /etc/ssl/certs/9039972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/9039972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo cat /usr/share/ca-certificates/9039972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.15s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-949323 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.8s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 ssh "sudo systemctl is-active docker": exit status 1 (391.189467ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 ssh "sudo systemctl is-active crio": exit status 1 (407.386082ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.80s)
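In the block above, `ssh: Process exited with status 3` plus `inactive` on stdout is the expected outcome: the cluster runs containerd, so `systemctl is-active` should fail for docker and crio (exit status 3 matches systemd's convention for an inactive unit). A minimal sketch of that check, where the hypothetical `is_active` stub stands in for `out/minikube-linux-arm64 -p <profile> ssh "sudo systemctl is-active <unit>"`:

```shell
# Sketch only: the non-selected runtimes (docker, crio) must report
# inactive with a non-zero exit code when containerd is the active runtime.
is_active() {
  case "$1" in
    containerd) echo active;   return 0 ;;
    *)          echo inactive; return 3 ;;  # systemd's exit code for "inactive"
  esac
}

status=pass
for runtime in docker crio; do
  if out=$(is_active "$runtime"); then
    echo "FAIL: $runtime unexpectedly $out"   # exit 0 would mean it is running
    status=fail
  else
    echo "ok: $runtime is $out"
  fi
done
echo "$status"
```

The real test treats a zero exit (runtime active) as a failure for any runtime other than the one the profile was started with.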

                                                
                                    
TestFunctional/parallel/License (0.39s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.39s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 version -o=json --components: (1.050810372s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-949323 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-949323
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-949323 image ls --format short --alsologtostderr:
I0717 20:23:58.223720  935421 out.go:296] Setting OutFile to fd 1 ...
I0717 20:23:58.224211  935421 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:23:58.224244  935421 out.go:309] Setting ErrFile to fd 2...
I0717 20:23:58.224646  935421 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:23:58.225481  935421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
I0717 20:23:58.226516  935421 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:23:58.226724  935421 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:23:58.227442  935421 cli_runner.go:164] Run: docker container inspect functional-949323 --format={{.State.Status}}
I0717 20:23:58.257884  935421 ssh_runner.go:195] Run: systemctl --version
I0717 20:23:58.257935  935421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-949323
I0717 20:23:58.292690  935421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33730 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/functional-949323/id_rsa Username:docker}
I0717 20:23:58.387042  935421 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-949323 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b18bf7 | 25.3MB |
| docker.io/library/minikube-local-cache-test | functional-949323  | sha256:b14a18 | 1.01kB |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:24bc64 | 80.7MB |
| registry.k8s.io/kube-apiserver              | v1.27.3            | sha256:39dfb0 | 30.4MB |
| registry.k8s.io/kube-controller-manager     | v1.27.3            | sha256:ab3683 | 28.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | alpine             | sha256:66bf2c | 16.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| docker.io/library/nginx                     | latest             | sha256:2002d3 | 67.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-proxy                  | v1.27.3            | sha256:fb73e9 | 21.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| localhost/my-image                          | functional-949323  | sha256:c4ce61 | 831kB  |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-scheduler              | v1.27.3            | sha256:bcb9e5 | 16.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-949323 image ls --format table --alsologtostderr:
I0717 20:24:02.737314  935807 out.go:296] Setting OutFile to fd 1 ...
I0717 20:24:02.737583  935807 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:24:02.737617  935807 out.go:309] Setting ErrFile to fd 2...
I0717 20:24:02.737638  935807 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:24:02.737957  935807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
I0717 20:24:02.738692  935807 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:24:02.738928  935807 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:24:02.739521  935807 cli_runner.go:164] Run: docker container inspect functional-949323 --format={{.State.Status}}
I0717 20:24:02.760256  935807 ssh_runner.go:195] Run: systemctl --version
I0717 20:24:02.760308  935807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-949323
I0717 20:24:02.779232  935807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33730 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/functional-949323/id_rsa Username:docker}
I0717 20:24:02.874619  935807 ssh_runner.go:195] Run: sudo crictl images --output json
2023/07/17 20:24:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-949323 image ls --format json --alsologtostderr:
[{"id":"sha256:ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"28214546"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:b14a18b1a682b123d4cdfb17e6ced91eaeab1fb81d0a5539500c4aaf2f935032","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-949323"],"size":"1006"},{"id":"sha256:66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6"],"repoTags":["docker.io/library/nginx:alpine"],"size":"16359946"},{"id":"sha256:c4ce61811c4f3c110561ac856ecdbc1a17f50877446437a910747d9e75155a31","repoDigests":[],"repoTags":["localhost/my-image:functional-949323"],"size":"830917"},{"id":"sha256:39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":["registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"30386419"},{"id":"sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"25334607"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"80665728"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":["registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"21369271"},{"id":"sha256:bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":["registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"16549864"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef"],"repoTags":["docker.io/library/nginx:latest"],"size":"67301964"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-949323 image ls --format json --alsologtostderr:
I0717 20:24:02.496244  935781 out.go:296] Setting OutFile to fd 1 ...
I0717 20:24:02.496468  935781 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:24:02.496513  935781 out.go:309] Setting ErrFile to fd 2...
I0717 20:24:02.496541  935781 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:24:02.496932  935781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
I0717 20:24:02.497574  935781 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:24:02.497737  935781 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:24:02.498238  935781 cli_runner.go:164] Run: docker container inspect functional-949323 --format={{.State.Status}}
I0717 20:24:02.516794  935781 ssh_runner.go:195] Run: systemctl --version
I0717 20:24:02.516847  935781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-949323
I0717 20:24:02.536196  935781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33730 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/functional-949323/id_rsa Username:docker}
I0717 20:24:02.630810  935781 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-949323 image ls --format yaml --alsologtostderr:
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "25334607"
- id: sha256:b14a18b1a682b123d4cdfb17e6ced91eaeab1fb81d0a5539500c4aaf2f935032
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-949323
size: "1006"
- id: sha256:66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
repoTags:
- docker.io/library/nginx:alpine
size: "16359946"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "28214546"
- id: sha256:bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "16549864"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
repoTags:
- docker.io/library/nginx:latest
size: "67301964"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "80665728"
- id: sha256:39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "30386419"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "21369271"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-949323 image ls --format yaml --alsologtostderr:
I0717 20:23:58.547541  935447 out.go:296] Setting OutFile to fd 1 ...
I0717 20:23:58.547807  935447 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:23:58.547846  935447 out.go:309] Setting ErrFile to fd 2...
I0717 20:23:58.547865  935447 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:23:58.548185  935447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
I0717 20:23:58.548827  935447 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:23:58.549029  935447 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:23:58.549550  935447 cli_runner.go:164] Run: docker container inspect functional-949323 --format={{.State.Status}}
I0717 20:23:58.580064  935447 ssh_runner.go:195] Run: systemctl --version
I0717 20:23:58.580117  935447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-949323
I0717 20:23:58.605009  935447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33730 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/functional-949323/id_rsa Username:docker}
I0717 20:23:58.702972  935447 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 ssh pgrep buildkitd: exit status 1 (363.244521ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image build -t localhost/my-image:functional-949323 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 image build -t localhost/my-image:functional-949323 testdata/build --alsologtostderr: (3.065916269s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-949323 image build -t localhost/my-image:functional-949323 testdata/build --alsologtostderr:
I0717 20:23:59.199711  935525 out.go:296] Setting OutFile to fd 1 ...
I0717 20:23:59.200521  935525 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:23:59.200534  935525 out.go:309] Setting ErrFile to fd 2...
I0717 20:23:59.200540  935525 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 20:23:59.200821  935525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
I0717 20:23:59.201501  935525 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:23:59.202089  935525 config.go:182] Loaded profile config "functional-949323": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
I0717 20:23:59.202709  935525 cli_runner.go:164] Run: docker container inspect functional-949323 --format={{.State.Status}}
I0717 20:23:59.221190  935525 ssh_runner.go:195] Run: systemctl --version
I0717 20:23:59.221243  935525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-949323
I0717 20:23:59.240972  935525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33730 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/functional-949323/id_rsa Username:docker}
I0717 20:23:59.343399  935525 build_images.go:151] Building image from path: /tmp/build.3496696629.tar
I0717 20:23:59.343470  935525 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 20:23:59.356243  935525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3496696629.tar
I0717 20:23:59.362301  935525 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3496696629.tar: stat -c "%s %y" /var/lib/minikube/build/build.3496696629.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3496696629.tar': No such file or directory
I0717 20:23:59.362336  935525 ssh_runner.go:362] scp /tmp/build.3496696629.tar --> /var/lib/minikube/build/build.3496696629.tar (3072 bytes)
I0717 20:23:59.395034  935525 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3496696629
I0717 20:23:59.407412  935525 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3496696629 -xf /var/lib/minikube/build/build.3496696629.tar
I0717 20:23:59.421256  935525 containerd.go:378] Building image: /var/lib/minikube/build/build.3496696629
I0717 20:23:59.421366  935525 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3496696629 --local dockerfile=/var/lib/minikube/build/build.3496696629 --output type=image,name=localhost/my-image:functional-949323
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:813db97e96684f60e557c875472e74f2b377fea52ded6d969c771df9b7588d36 0.0s done
#8 exporting config sha256:c4ce61811c4f3c110561ac856ecdbc1a17f50877446437a910747d9e75155a31 0.0s done
#8 naming to localhost/my-image:functional-949323 done
#8 DONE 0.2s
I0717 20:24:02.164933  935525 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3496696629 --local dockerfile=/var/lib/minikube/build/build.3496696629 --output type=image,name=localhost/my-image:functional-949323: (2.743530619s)
I0717 20:24:02.165017  935525 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3496696629
I0717 20:24:02.176670  935525 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3496696629.tar
I0717 20:24:02.188069  935525 build_images.go:207] Built localhost/my-image:functional-949323 from /tmp/build.3496696629.tar
I0717 20:24:02.188098  935525 build_images.go:123] succeeded building to: functional-949323
I0717 20:24:02.188103  935525 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

TestFunctional/parallel/ImageCommands/Setup (2.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.609969521s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-949323
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-949323 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-949323 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-h7scb" [4c5f0d4b-6900-44bf-b44f-faafbdb0c533] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-h7scb" [4c5f0d4b-6900-44bf-b44f-faafbdb0c533] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.041306862s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.37s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 service list -o json
functional_test.go:1493: Took "458.568539ms" to run "out/minikube-linux-arm64 -p functional-949323 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30253
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30253
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-949323 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-949323 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-949323 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 932172: os: process already finished
helpers_test.go:502: unable to terminate pid 932031: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-949323 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-949323 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-949323 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f978ac48-67b0-4ea9-a865-727bc8c18832] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f978ac48-67b0-4ea9-a865-727bc8c18832] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.020791569s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.70s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image rm gcr.io/google-containers/addon-resizer:functional-949323 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-949323
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 image save --daemon gcr.io/google-containers/addon-resizer:functional-949323 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-949323
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-949323 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.168.113 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-949323 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "369.643826ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "61.408507ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "372.61706ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "118.207722ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/MountCmd/any-port (7.51s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdany-port2183771552/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689625424064230457" to /tmp/TestFunctionalparallelMountCmdany-port2183771552/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689625424064230457" to /tmp/TestFunctionalparallelMountCmdany-port2183771552/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689625424064230457" to /tmp/TestFunctionalparallelMountCmdany-port2183771552/001/test-1689625424064230457
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (512.787889ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 20:23 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 20:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 20:23 test-1689625424064230457
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh cat /mount-9p/test-1689625424064230457
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-949323 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [36f26f88-7302-4a0f-81d7-30ab6e29e61d] Pending
helpers_test.go:344: "busybox-mount" [36f26f88-7302-4a0f-81d7-30ab6e29e61d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [36f26f88-7302-4a0f-81d7-30ab6e29e61d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [36f26f88-7302-4a0f-81d7-30ab6e29e61d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.008308268s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-949323 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdany-port2183771552/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.51s)

TestFunctional/parallel/MountCmd/specific-port (2.63s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdspecific-port2529098964/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (546.312129ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdspecific-port2529098964/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-949323 ssh "sudo umount -f /mount-9p": exit status 1 (418.849587ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-949323 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdspecific-port2529098964/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.63s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T" /mount1: (1.328554744s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-949323 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-949323 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-949323 /tmp/TestFunctionalparallelMountCmdVerifyCleanup115901451/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

TestFunctional/delete_addon-resizer_images (0.1s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-949323
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-949323
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-949323
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (91.55s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-786531 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0717 20:25:00.945154  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-786531 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m31.549811746s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (91.55s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.6s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-786531 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-786531 addons enable ingress --alsologtostderr -v=5: (9.60397559s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.60s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.87s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-786531 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.87s)

TestJSONOutput/start/Command (53.42s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-258873 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0717 20:27:17.100207  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-258873 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (53.42093583s)
--- PASS: TestJSONOutput/start/Command (53.42s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.8s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-258873 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.8s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-258873 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.80s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-258873 --output=json --user=testUser
E0717 20:27:44.785426  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-258873 --output=json --user=testUser: (5.872458345s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-899462 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-899462 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.028015ms)

-- stdout --
	{"specversion":"1.0","id":"a9838e00-19ee-41ae-b09f-f3ce55efeefc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-899462] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"10002af8-b44b-45d0-8ecd-6024e49cda59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"26afc599-5972-446f-84ab-54efbdd0e062","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"057b216a-1ed2-467c-bdac-cd98e957244c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig"}}
	{"specversion":"1.0","id":"01d95537-4bd8-452e-a9bb-3d7ab64fd1b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube"}}
	{"specversion":"1.0","id":"745e8a66-b316-4396-be17-f9e45be61a70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9cfbd520-1eb4-4b11-92f2-2f45b10c5fa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1caa88d2-4c6a-4eed-b020-ad14aa2891ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-899462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-899462
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (46.56s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-459196 --network=
E0717 20:28:10.430468  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:10.435774  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:10.446026  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:10.466265  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:10.506507  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:10.586783  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:10.747789  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:11.068647  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:11.709542  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:12.990515  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:15.550722  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:20.671542  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:28:30.912613  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-459196 --network=: (44.352675628s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-459196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-459196
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-459196: (2.173670143s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.56s)

TestKicCustomNetwork/use_default_bridge_network (33.89s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-579449 --network=bridge
E0717 20:28:51.392907  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-579449 --network=bridge: (31.844655045s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-579449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-579449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-579449: (2.017645961s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.89s)

TestKicExistingNetwork (36.95s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-305173 --network=existing-network
E0717 20:29:32.353124  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-305173 --network=existing-network: (34.794631367s)
helpers_test.go:175: Cleaning up "existing-network-305173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-305173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-305173: (1.991065398s)
--- PASS: TestKicExistingNetwork (36.95s)

TestKicCustomSubnet (35.79s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-779570 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-779570 --subnet=192.168.60.0/24: (33.59106129s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-779570 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-779570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-779570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-779570: (2.175866715s)
--- PASS: TestKicCustomSubnet (35.79s)

TestKicStaticIP (36.74s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-311710 --static-ip=192.168.200.200
E0717 20:30:48.696341  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:48.701686  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:48.711975  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:48.732274  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:48.772546  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:48.852813  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:49.013197  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:49.333716  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:49.974650  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:51.254943  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:53.816896  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:30:54.273511  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:30:58.937575  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-311710 --static-ip=192.168.200.200: (34.78609228s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-311710 ip
helpers_test.go:175: Cleaning up "static-ip-311710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-311710
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-311710: (1.79072777s)
--- PASS: TestKicStaticIP (36.74s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.72s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-938525 --driver=docker  --container-runtime=containerd
E0717 20:31:09.177869  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:31:29.659063  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-938525 --driver=docker  --container-runtime=containerd: (35.020391905s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-941790 --driver=docker  --container-runtime=containerd
E0717 20:32:10.619212  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-941790 --driver=docker  --container-runtime=containerd: (33.479688209s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-938525
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-941790
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-941790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-941790
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-941790: (1.953148655s)
helpers_test.go:175: Cleaning up "first-938525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-938525
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-938525: (1.985881465s)
--- PASS: TestMinikubeProfile (73.72s)

TestMountStart/serial/StartWithMountFirst (6.37s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-087034 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0717 20:32:17.100213  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-087034 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.368031642s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.37s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-087034 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.82s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-089282 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-089282 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.818652525s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.82s)

TestMountStart/serial/VerifyMountSecond (0.34s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-089282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-087034 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-087034 --alsologtostderr -v=5: (1.711264318s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-089282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-089282
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-089282: (1.222880427s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.74s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-089282
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-089282: (6.74462583s)
--- PASS: TestMountStart/serial/RestartStopped (7.74s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-089282 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (77.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-526341 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0717 20:33:10.430639  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:33:32.540381  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:33:38.113868  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-526341 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.437294899s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.16s)

TestMultiNode/serial/DeployApp2Nodes (5.02s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-526341 -- rollout status deployment/busybox: (2.829328635s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-2n4qg -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-qbzpm -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-2n4qg -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-qbzpm -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-2n4qg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-qbzpm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.02s)

TestMultiNode/serial/PingHostFrom2Pods (1.16s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-2n4qg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-2n4qg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-qbzpm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-526341 -- exec busybox-67b7f59bb-qbzpm -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.16s)

TestMultiNode/serial/AddNode (18.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-526341 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-526341 -v 3 --alsologtostderr: (18.161721122s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.91s)

TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

TestMultiNode/serial/CopyFile (10.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp testdata/cp-test.txt multinode-526341:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3123239103/001/cp-test_multinode-526341.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341:/home/docker/cp-test.txt multinode-526341-m02:/home/docker/cp-test_multinode-526341_multinode-526341-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m02 "sudo cat /home/docker/cp-test_multinode-526341_multinode-526341-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341:/home/docker/cp-test.txt multinode-526341-m03:/home/docker/cp-test_multinode-526341_multinode-526341-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m03 "sudo cat /home/docker/cp-test_multinode-526341_multinode-526341-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp testdata/cp-test.txt multinode-526341-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3123239103/001/cp-test_multinode-526341-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341-m02:/home/docker/cp-test.txt multinode-526341:/home/docker/cp-test_multinode-526341-m02_multinode-526341.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341 "sudo cat /home/docker/cp-test_multinode-526341-m02_multinode-526341.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341-m02:/home/docker/cp-test.txt multinode-526341-m03:/home/docker/cp-test_multinode-526341-m02_multinode-526341-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m03 "sudo cat /home/docker/cp-test_multinode-526341-m02_multinode-526341-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp testdata/cp-test.txt multinode-526341-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3123239103/001/cp-test_multinode-526341-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341-m03:/home/docker/cp-test.txt multinode-526341:/home/docker/cp-test_multinode-526341-m03_multinode-526341.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341 "sudo cat /home/docker/cp-test_multinode-526341-m03_multinode-526341.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 cp multinode-526341-m03:/home/docker/cp-test.txt multinode-526341-m02:/home/docker/cp-test_multinode-526341-m03_multinode-526341-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 ssh -n multinode-526341-m02 "sudo cat /home/docker/cp-test_multinode-526341-m03_multinode-526341-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.93s)

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-526341 node stop m03: (1.252100296s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-526341 status: exit status 7 (570.471494ms)

-- stdout --
	multinode-526341
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-526341-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-526341-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-526341 status --alsologtostderr: exit status 7 (565.633437ms)

-- stdout --
	multinode-526341
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-526341-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-526341-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 20:34:38.612219  982899 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:34:38.612452  982899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:34:38.612483  982899 out.go:309] Setting ErrFile to fd 2...
	I0717 20:34:38.612504  982899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:34:38.612787  982899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:34:38.613041  982899 out.go:303] Setting JSON to false
	I0717 20:34:38.613168  982899 mustload.go:65] Loading cluster: multinode-526341
	I0717 20:34:38.613284  982899 notify.go:220] Checking for updates...
	I0717 20:34:38.614381  982899 config.go:182] Loaded profile config "multinode-526341": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:34:38.614416  982899 status.go:255] checking status of multinode-526341 ...
	I0717 20:34:38.615486  982899 cli_runner.go:164] Run: docker container inspect multinode-526341 --format={{.State.Status}}
	I0717 20:34:38.640398  982899 status.go:330] multinode-526341 host status = "Running" (err=<nil>)
	I0717 20:34:38.640425  982899 host.go:66] Checking if "multinode-526341" exists ...
	I0717 20:34:38.640723  982899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-526341
	I0717 20:34:38.658942  982899 host.go:66] Checking if "multinode-526341" exists ...
	I0717 20:34:38.659336  982899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:34:38.659390  982899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-526341
	I0717 20:34:38.690885  982899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33795 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/multinode-526341/id_rsa Username:docker}
	I0717 20:34:38.783964  982899 ssh_runner.go:195] Run: systemctl --version
	I0717 20:34:38.789761  982899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:34:38.804243  982899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:34:38.882559  982899 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-07-17 20:34:38.868623818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:34:38.883148  982899 kubeconfig.go:92] found "multinode-526341" server: "https://192.168.58.2:8443"
	I0717 20:34:38.883171  982899 api_server.go:166] Checking apiserver status ...
	I0717 20:34:38.883228  982899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 20:34:38.897368  982899 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1312/cgroup
	I0717 20:34:38.909257  982899 api_server.go:182] apiserver freezer: "8:freezer:/docker/c33c99fdafb68588caab87028396fab5fe876faa5a246254d750aa944a281204/kubepods/burstable/pod37876e24f99d4ed2b764c639b5927037/f054ca7f6778dc03d162ea9cbe93155d8044f76066f24c029711923d6b3ed949"
	I0717 20:34:38.909335  982899 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c33c99fdafb68588caab87028396fab5fe876faa5a246254d750aa944a281204/kubepods/burstable/pod37876e24f99d4ed2b764c639b5927037/f054ca7f6778dc03d162ea9cbe93155d8044f76066f24c029711923d6b3ed949/freezer.state
	I0717 20:34:38.920220  982899 api_server.go:204] freezer state: "THAWED"
	I0717 20:34:38.920248  982899 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 20:34:38.929916  982899 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 20:34:38.929947  982899 status.go:421] multinode-526341 apiserver status = Running (err=<nil>)
	I0717 20:34:38.929964  982899 status.go:257] multinode-526341 status: &{Name:multinode-526341 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 20:34:38.929987  982899 status.go:255] checking status of multinode-526341-m02 ...
	I0717 20:34:38.930295  982899 cli_runner.go:164] Run: docker container inspect multinode-526341-m02 --format={{.State.Status}}
	I0717 20:34:38.951615  982899 status.go:330] multinode-526341-m02 host status = "Running" (err=<nil>)
	I0717 20:34:38.951644  982899 host.go:66] Checking if "multinode-526341-m02" exists ...
	I0717 20:34:38.951970  982899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-526341-m02
	I0717 20:34:38.971407  982899 host.go:66] Checking if "multinode-526341-m02" exists ...
	I0717 20:34:38.971878  982899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 20:34:38.971934  982899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-526341-m02
	I0717 20:34:38.993164  982899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33800 SSHKeyPath:/home/jenkins/minikube-integration/16890-898608/.minikube/machines/multinode-526341-m02/id_rsa Username:docker}
	I0717 20:34:39.088156  982899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 20:34:39.104043  982899 status.go:257] multinode-526341-m02 status: &{Name:multinode-526341-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 20:34:39.104078  982899 status.go:255] checking status of multinode-526341-m03 ...
	I0717 20:34:39.104409  982899 cli_runner.go:164] Run: docker container inspect multinode-526341-m03 --format={{.State.Status}}
	I0717 20:34:39.124516  982899 status.go:330] multinode-526341-m03 host status = "Stopped" (err=<nil>)
	I0717 20:34:39.124548  982899 status.go:343] host is not running, skipping remaining checks
	I0717 20:34:39.124555  982899 status.go:257] multinode-526341-m03 status: &{Name:multinode-526341-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)

TestMultiNode/serial/StartAfterStop (23.85s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-526341 node start m03 --alsologtostderr: (22.959279779s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (23.85s)

TestMultiNode/serial/RestartKeepsNodes (140.35s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-526341
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-526341
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-526341: (25.306752967s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-526341 --wait=true -v=8 --alsologtostderr
E0717 20:35:48.696740  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:36:16.381465  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:37:17.100241  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-526341 --wait=true -v=8 --alsologtostderr: (1m54.905974356s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-526341
--- PASS: TestMultiNode/serial/RestartKeepsNodes (140.35s)

TestMultiNode/serial/DeleteNode (5.13s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-526341 node delete m03: (4.36256155s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.13s)

TestMultiNode/serial/StopMultiNode (24.41s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-526341 stop: (24.212236228s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-526341 status: exit status 7 (96.957891ms)

-- stdout --
	multinode-526341
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-526341-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-526341 status --alsologtostderr: exit status 7 (95.895553ms)

-- stdout --
	multinode-526341
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-526341-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 20:37:52.820253  991541 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:37:52.820413  991541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:37:52.820421  991541 out.go:309] Setting ErrFile to fd 2...
	I0717 20:37:52.820426  991541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:37:52.820689  991541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:37:52.820921  991541 out.go:303] Setting JSON to false
	I0717 20:37:52.820968  991541 mustload.go:65] Loading cluster: multinode-526341
	I0717 20:37:52.821070  991541 notify.go:220] Checking for updates...
	I0717 20:37:52.821361  991541 config.go:182] Loaded profile config "multinode-526341": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:37:52.821379  991541 status.go:255] checking status of multinode-526341 ...
	I0717 20:37:52.825309  991541 cli_runner.go:164] Run: docker container inspect multinode-526341 --format={{.State.Status}}
	I0717 20:37:52.843257  991541 status.go:330] multinode-526341 host status = "Stopped" (err=<nil>)
	I0717 20:37:52.843290  991541 status.go:343] host is not running, skipping remaining checks
	I0717 20:37:52.843299  991541 status.go:257] multinode-526341 status: &{Name:multinode-526341 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 20:37:52.843333  991541 status.go:255] checking status of multinode-526341-m02 ...
	I0717 20:37:52.843650  991541 cli_runner.go:164] Run: docker container inspect multinode-526341-m02 --format={{.State.Status}}
	I0717 20:37:52.867419  991541 status.go:330] multinode-526341-m02 host status = "Stopped" (err=<nil>)
	I0717 20:37:52.867440  991541 status.go:343] host is not running, skipping remaining checks
	I0717 20:37:52.867446  991541 status.go:257] multinode-526341-m02 status: &{Name:multinode-526341-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.41s)

TestMultiNode/serial/RestartMultiNode (106.36s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-526341 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0717 20:38:10.429510  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 20:38:40.146094  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-526341 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m45.592436701s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-526341 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.36s)

TestMultiNode/serial/ValidateNameConflict (45.55s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-526341
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-526341-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-526341-m02 --driver=docker  --container-runtime=containerd: exit status 14 (85.025532ms)

-- stdout --
	* [multinode-526341-m02] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-526341-m02' is duplicated with machine name 'multinode-526341-m02' in profile 'multinode-526341'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-526341-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-526341-m03 --driver=docker  --container-runtime=containerd: (43.015657591s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-526341
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-526341: exit status 80 (380.058465ms)

-- stdout --
	* Adding node m03 to cluster multinode-526341
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-526341-m03 already exists in multinode-526341-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-526341-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-526341-m03: (2.016710937s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.55s)

TestPreload (149.22s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-157937 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0717 20:40:48.697086  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-157937 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m17.088932968s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-157937 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-157937 image pull gcr.io/k8s-minikube/busybox: (1.387701428s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-157937
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-157937: (5.81267621s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-157937 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0717 20:42:17.100267  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-157937 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m2.260965743s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-157937 image list
helpers_test.go:175: Cleaning up "test-preload-157937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-157937
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-157937: (2.406145775s)
--- PASS: TestPreload (149.22s)

TestScheduledStopUnix (119.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-183475 --memory=2048 --driver=docker  --container-runtime=containerd
E0717 20:43:10.431106  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-183475 --memory=2048 --driver=docker  --container-runtime=containerd: (42.492582382s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-183475 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-183475 -n scheduled-stop-183475
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-183475 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-183475 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-183475 -n scheduled-stop-183475
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-183475
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-183475 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 20:44:33.474148  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-183475
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-183475: exit status 7 (74.758799ms)

-- stdout --
	scheduled-stop-183475
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-183475 -n scheduled-stop-183475
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-183475 -n scheduled-stop-183475: exit status 7 (71.820415ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-183475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-183475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-183475: (5.139831746s)
--- PASS: TestScheduledStopUnix (119.28s)

TestInsufficientStorage (10.86s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-446661 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-446661 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.303430044s)

-- stdout --
	{"specversion":"1.0","id":"f2650108-80ba-4668-8d16-d85c68d036c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-446661] minikube v1.30.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"39355486-2d84-4749-92db-ba4ca4ea59a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"bece6ff9-3388-4d16-b3b7-984d333c4d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ade624e2-5a43-4192-9ba4-178b78bee403","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig"}}
	{"specversion":"1.0","id":"2604e6e5-e198-47fe-a249-36bb2e09efbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube"}}
	{"specversion":"1.0","id":"a5925492-b289-4d2a-a249-fde13cb6e365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"42a031b2-5f0b-48b4-99e4-77db39d93242","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"29adf487-ce49-448a-96b6-4b55232c6fef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4db8cda6-0996-462b-be1a-7b79ee3a2449","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"27987d16-dcd9-460a-9393-9bc102ee3308","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"81bd6406-4d11-437e-b4e8-ea6f9bd64ed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"91d8b099-ef52-4a70-b5e7-e09d60732044","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-446661 in cluster insufficient-storage-446661","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"af415957-f736-46ac-aff4-22a59f762297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd0f2740-daba-4bca-a202-d3c2dd6d4cb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2589662-f1e9-4666-ba3f-c6eb6bc506bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-446661 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-446661 --output=json --layout=cluster: exit status 7 (300.325607ms)

-- stdout --
	{"Name":"insufficient-storage-446661","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-446661","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 20:45:05.864657 1008941 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-446661" does not appear in /home/jenkins/minikube-integration/16890-898608/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-446661 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-446661 --output=json --layout=cluster: exit status 7 (329.984347ms)

-- stdout --
	{"Name":"insufficient-storage-446661","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-446661","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 20:45:06.194929 1008994 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-446661" does not appear in /home/jenkins/minikube-integration/16890-898608/kubeconfig
	E0717 20:45:06.207544 1008994 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/insufficient-storage-446661/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-446661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-446661
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-446661: (1.929529085s)
--- PASS: TestInsufficientStorage (10.86s)

TestRunningBinaryUpgrade (136.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.421762872.exe start -p running-upgrade-646434 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0717 20:52:17.099827  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:53:10.430479  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.22.0.421762872.exe start -p running-upgrade-646434 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m24.128860731s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-646434 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:142: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-646434 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (46.774883962s)
helpers_test.go:175: Cleaning up "running-upgrade-646434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-646434
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-646434: (4.845565769s)
--- PASS: TestRunningBinaryUpgrade (136.84s)

TestKubernetesUpgrade (440.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854415 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0717 20:47:11.741852  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 20:47:17.100198  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-854415 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m15.071065309s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-854415
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-854415: (11.810159968s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-854415 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-854415 status --format={{.Host}}: exit status 7 (75.652286ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854415 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0717 20:48:10.430312  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-854415 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m16.14141548s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-854415 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854415 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-854415 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (114.254134ms)

-- stdout --
	* [kubernetes-upgrade-854415] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-854415
	    minikube start -p kubernetes-upgrade-854415 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8544152 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-854415 --kubernetes-version=v1.27.3
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854415 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-854415 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.620383069s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-854415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-854415
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-854415: (2.692908578s)
--- PASS: TestKubernetesUpgrade (440.74s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476088 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-476088 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (77.153476ms)

-- stdout --
	* [NoKubernetes-476088] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (37.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476088 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476088 --driver=docker  --container-runtime=containerd: (36.90458487s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-476088 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.45s)

TestNoKubernetes/serial/StartWithStopK8s (30.2s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476088 --no-kubernetes --driver=docker  --container-runtime=containerd
E0717 20:45:48.696442  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476088 --no-kubernetes --driver=docker  --container-runtime=containerd: (27.286585051s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-476088 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-476088 status -o json: exit status 2 (685.654898ms)

-- stdout --
	{"Name":"NoKubernetes-476088","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-476088
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-476088: (2.22647876s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.20s)

TestNoKubernetes/serial/Start (9.06s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476088 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476088 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.057955716s)
--- PASS: TestNoKubernetes/serial/Start (9.06s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-476088 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-476088 "sudo systemctl is-active --quiet service kubelet": exit status 1 (432.889362ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-476088
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-476088: (1.234976983s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-476088 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-476088 --driver=docker  --container-runtime=containerd: (7.325705301s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.33s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-476088 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-476088 "sudo systemctl is-active --quiet service kubelet": exit status 1 (376.200249ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestStoppedBinaryUpgrade/Setup (1.14s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.14s)

TestStoppedBinaryUpgrade/Upgrade (153.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.3431960980.exe start -p stopped-upgrade-964950 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.22.0.3431960980.exe start -p stopped-upgrade-964950 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m22.570122512s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.22.0.3431960980.exe -p stopped-upgrade-964950 stop
E0717 20:50:48.696253  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.22.0.3431960980.exe -p stopped-upgrade-964950 stop: (20.106898119s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-964950 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-964950 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.414727093s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (153.09s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-964950
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-964950: (1.151830837s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

TestPause/serial/Start (65.27s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-366551 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-366551 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m5.270639663s)
--- PASS: TestPause/serial/Start (65.27s)

TestNetworkPlugins/group/false (3.79s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-414377 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-414377 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (223.870279ms)

-- stdout --
	* [false-414377] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0717 20:54:54.528760 1044943 out.go:296] Setting OutFile to fd 1 ...
	I0717 20:54:54.529097 1044943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:54:54.529130 1044943 out.go:309] Setting ErrFile to fd 2...
	I0717 20:54:54.529167 1044943 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 20:54:54.529511 1044943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-898608/.minikube/bin
	I0717 20:54:54.530841 1044943 out.go:303] Setting JSON to false
	I0717 20:54:54.532265 1044943 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16642,"bootTime":1689610653,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0717 20:54:54.532398 1044943 start.go:138] virtualization:  
	I0717 20:54:54.535277 1044943 out.go:177] * [false-414377] minikube v1.30.1 on Ubuntu 20.04 (arm64)
	I0717 20:54:54.538389 1044943 notify.go:220] Checking for updates...
	I0717 20:54:54.541305 1044943 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 20:54:54.542900 1044943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 20:54:54.544690 1044943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-898608/kubeconfig
	I0717 20:54:54.546712 1044943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-898608/.minikube
	I0717 20:54:54.548623 1044943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0717 20:54:54.551133 1044943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 20:54:54.553480 1044943 config.go:182] Loaded profile config "pause-366551": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.27.3
	I0717 20:54:54.553661 1044943 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 20:54:54.579725 1044943 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 20:54:54.579853 1044943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 20:54:54.677250 1044943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-17 20:54:54.667471589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215171072 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1]] Warnings:<nil>}}
	I0717 20:54:54.677357 1044943 docker.go:294] overlay module found
	I0717 20:54:54.680961 1044943 out.go:177] * Using the docker driver based on user configuration
	I0717 20:54:54.682882 1044943 start.go:298] selected driver: docker
	I0717 20:54:54.682898 1044943 start.go:880] validating driver "docker" against <nil>
	I0717 20:54:54.682912 1044943 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 20:54:54.685542 1044943 out.go:177] 
	W0717 20:54:54.687635 1044943 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0717 20:54:54.690325 1044943 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-414377 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-414377

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-414377

>>> host: /etc/nsswitch.conf:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /etc/hosts:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /etc/resolv.conf:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-414377

>>> host: crictl pods:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: crictl containers:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> k8s: describe netcat deployment:
error: context "false-414377" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-414377" does not exist

>>> k8s: netcat logs:
error: context "false-414377" does not exist

>>> k8s: describe coredns deployment:
error: context "false-414377" does not exist

>>> k8s: describe coredns pods:
error: context "false-414377" does not exist

>>> k8s: coredns logs:
error: context "false-414377" does not exist

>>> k8s: describe api server pod(s):
error: context "false-414377" does not exist

>>> k8s: api server logs:
error: context "false-414377" does not exist

>>> host: /etc/cni:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: ip a s:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: ip r s:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: iptables-save:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: iptables table nat:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> k8s: describe kube-proxy daemon set:
error: context "false-414377" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-414377" does not exist

>>> k8s: kube-proxy logs:
error: context "false-414377" does not exist

>>> host: kubelet daemon status:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: kubelet daemon config:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> k8s: kubelet logs:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Jul 2023 20:54:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-366551
contexts:
- context:
    cluster: pause-366551
    extensions:
    - extension:
        last-update: Mon, 17 Jul 2023 20:54:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: pause-366551
  name: pause-366551
current-context: pause-366551
kind: Config
preferences: {}
users:
- name: pause-366551
  user:
    client-certificate: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/pause-366551/client.crt
    client-key: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/pause-366551/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-414377

>>> host: docker daemon status:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: docker daemon config:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /etc/docker/daemon.json:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: docker system info:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: cri-docker daemon status:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: cri-docker daemon config:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: cri-dockerd version:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: containerd daemon status:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: containerd daemon config:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /etc/containerd/config.toml:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: containerd config dump:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: crio daemon status:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: crio daemon config:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: /etc/crio:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

>>> host: crio config:
* Profile "false-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-414377"

----------------------- debugLogs end: false-414377 [took: 3.398672018s] --------------------------------
helpers_test.go:175: Cleaning up "false-414377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-414377
--- PASS: TestNetworkPlugins/group/false (3.79s)

TestPause/serial/SecondStartNoReconfiguration (15.59s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-366551 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-366551 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.567950137s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (15.59s)

TestPause/serial/Pause (1.03s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-366551 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-366551 --alsologtostderr -v=5: (1.029985931s)
--- PASS: TestPause/serial/Pause (1.03s)

TestPause/serial/VerifyStatus (0.52s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-366551 --output=json --layout=cluster
E0717 20:55:20.147771  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-366551 --output=json --layout=cluster: exit status 2 (516.833049ms)
-- stdout --
	{"Name":"pause-366551","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-366551","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.52s)

TestPause/serial/Unpause (1.06s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-366551 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-366551 --alsologtostderr -v=5: (1.059635694s)
--- PASS: TestPause/serial/Unpause (1.06s)

TestPause/serial/PauseAgain (1.26s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-366551 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-366551 --alsologtostderr -v=5: (1.262843037s)
--- PASS: TestPause/serial/PauseAgain (1.26s)

TestPause/serial/DeletePaused (3.1s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-366551 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-366551 --alsologtostderr -v=5: (3.096786117s)
--- PASS: TestPause/serial/DeletePaused (3.10s)

TestPause/serial/VerifyDeletedResources (0.39s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-366551
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-366551: exit status 1 (18.281766ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-366551: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.39s)

TestStartStop/group/old-k8s-version/serial/FirstStart (138.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-309034 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0717 20:57:17.099660  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 20:58:10.431029  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-309034 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m18.615287806s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (138.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-309034 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ac4ea138-65fa-4337-937d-2efe03519ab0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ac4ea138-65fa-4337-937d-2efe03519ab0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.034861262s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-309034 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-309034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-309034 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014137304s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-309034 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-309034 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-309034 --alsologtostderr -v=3: (12.152122172s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309034 -n old-k8s-version-309034
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309034 -n old-k8s-version-309034: exit status 7 (98.621432ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-309034 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (676.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-309034 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-309034 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m16.454207302s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-309034 -n old-k8s-version-309034
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (676.91s)

TestStartStop/group/no-preload/serial/FirstStart (70.3s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-440201 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-440201 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m10.296250447s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.30s)

TestStartStop/group/no-preload/serial/DeployApp (8.52s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-440201 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2a7d1121-0389-49c3-890f-763df790c3a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2a7d1121-0389-49c3-890f-763df790c3a1] Running
E0717 21:00:48.696503  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.041594026s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-440201 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.52s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-440201 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-440201 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.134373438s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-440201 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/no-preload/serial/Stop (12.16s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-440201 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-440201 --alsologtostderr -v=3: (12.160557197s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-440201 -n no-preload-440201
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-440201 -n no-preload-440201: exit status 7 (75.996614ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-440201 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (354.95s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-440201 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 21:01:13.474661  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 21:02:17.100271  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 21:03:10.429878  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 21:03:51.742770  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 21:05:48.696976  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-440201 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m54.279306354s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-440201 -n no-preload-440201
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (354.95s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nrhpb" [8815bcf6-d889-4dcf-adc6-1242d515a706] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nrhpb" [8815bcf6-d889-4dcf-adc6-1242d515a706] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.024911298s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nrhpb" [8815bcf6-d889-4dcf-adc6-1242d515a706] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008285971s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-440201 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-440201 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/no-preload/serial/Pause (3.47s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-440201 --alsologtostderr -v=1
E0717 21:07:17.100398  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-440201 -n no-preload-440201
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-440201 -n no-preload-440201: exit status 2 (365.649ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-440201 -n no-preload-440201
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-440201 -n no-preload-440201: exit status 2 (362.543557ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-440201 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-440201 -n no-preload-440201
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-440201 -n no-preload-440201
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.47s)

TestStartStop/group/embed-certs/serial/FirstStart (96.82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-617713 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 21:08:10.430015  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-617713 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m36.815556074s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.82s)

TestStartStop/group/embed-certs/serial/DeployApp (8.47s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-617713 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [89ae15fc-f4bb-4e19-ab55-52621fa83b0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [89ae15fc-f4bb-4e19-ab55-52621fa83b0c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.027843665s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-617713 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.47s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-617713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-617713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.178958447s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-617713 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-617713 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-617713 --alsologtostderr -v=3: (12.141680184s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-617713 -n embed-certs-617713
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-617713 -n embed-certs-617713: exit status 7 (79.534791ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-617713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (349.49s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-617713 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-617713 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m49.002877559s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-617713 -n embed-certs-617713
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (349.49s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-wzsdj" [4f35e298-1ae0-456d-8d27-2e8ba696b75f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.028973767s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-wzsdj" [4f35e298-1ae0-456d-8d27-2e8ba696b75f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008612393s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-309034 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-309034 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/old-k8s-version/serial/Pause (3.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-309034 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-309034 -n old-k8s-version-309034
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-309034 -n old-k8s-version-309034: exit status 2 (355.837306ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-309034 -n old-k8s-version-309034
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-309034 -n old-k8s-version-309034: exit status 2 (427.370899ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-309034 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-309034 -n old-k8s-version-309034
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-309034 -n old-k8s-version-309034
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.59s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-332286 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 21:10:43.618615  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:43.623862  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:43.634271  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:43.654502  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:43.694801  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:43.774980  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:43.935280  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:44.255734  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:44.895901  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:46.176744  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:48.696266  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 21:10:48.737447  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:10:53.857631  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:11:04.098061  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:11:24.578318  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:12:00.170138  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
E0717 21:12:05.538552  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-332286 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (1m29.724271362s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.72s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-332286 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7a8a053e-ba2e-4d0c-8730-8d167494cb3d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7a8a053e-ba2e-4d0c-8730-8d167494cb3d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.0314998s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-332286 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-332286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-332286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.210693164s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-332286 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-332286 --alsologtostderr -v=3
E0717 21:12:17.100383  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/addons-911602/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-332286 --alsologtostderr -v=3: (12.172530086s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286: exit status 7 (90.555179ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-332286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-332286 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 21:13:10.429488  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 21:13:27.459656  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:13:41.647428  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:41.652738  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:41.663019  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:41.683352  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:41.723627  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:41.803973  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:41.965096  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:42.285559  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:42.926559  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:44.207031  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:46.767571  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:13:51.888285  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:14:02.129095  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:14:22.609493  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
E0717 21:15:03.570093  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-332286 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (5m46.143394129s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.65s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-x2hch" [b5e5b411-07dc-403e-8185-abb8dbc64391] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-x2hch" [b5e5b411-07dc-403e-8185-abb8dbc64391] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.025541616s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-x2hch" [b5e5b411-07dc-403e-8185-abb8dbc64391] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008628706s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-617713 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-617713 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/embed-certs/serial/Pause (3.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-617713 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-617713 -n embed-certs-617713
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-617713 -n embed-certs-617713: exit status 2 (359.846514ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-617713 -n embed-certs-617713
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-617713 -n embed-certs-617713: exit status 2 (361.896697ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-617713 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-617713 -n embed-certs-617713
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-617713 -n embed-certs-617713
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.62s)

TestStartStop/group/newest-cni/serial/FirstStart (50.23s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-615445 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
E0717 21:15:43.618381  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:15:48.697222  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
E0717 21:16:11.300300  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:16:25.491079  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-615445 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (50.233937146s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.23s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-615445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-615445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.442033032s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.44s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-615445 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-615445 --alsologtostderr -v=3: (1.29999077s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-615445 -n newest-cni-615445
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-615445 -n newest-cni-615445: exit status 7 (70.565456ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-615445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (41.97s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-615445 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-615445 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.27.3: (41.586943164s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-615445 -n newest-cni-615445
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.97s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-615445 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (3.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-615445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-615445 -n newest-cni-615445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-615445 -n newest-cni-615445: exit status 2 (365.111803ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-615445 -n newest-cni-615445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-615445 -n newest-cni-615445: exit status 2 (347.572799ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-615445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-615445 -n newest-cni-615445
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-615445 -n newest-cni-615445
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.41s)

TestNetworkPlugins/group/auto/Start (92.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0717 21:17:53.475369  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
E0717 21:18:10.430066  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/functional-949323/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m32.829625019s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.83s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5gfbm" [6c3c7ecf-d45a-4ca8-97b8-53128e20a929] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5gfbm" [6c3c7ecf-d45a-4ca8-97b8-53128e20a929] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.036577735s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5gfbm" [6c3c7ecf-d45a-4ca8-97b8-53128e20a929] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016225845s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-332286 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-332286 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-332286 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286: exit status 2 (376.36987ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286: exit status 2 (358.116835ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-332286 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-332286 -n default-k8s-diff-port-332286
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.49s)

TestNetworkPlugins/group/kindnet/Start (83.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0717 21:18:41.646937  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m23.193625647s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.19s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-414377 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (11.51s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-414377 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-nl9gf" [abebec8b-09f9-44e8-9135-6b71253dccf2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-nl9gf" [abebec8b-09f9-44e8-9135-6b71253dccf2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.01217798s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.51s)

TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-414377 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

TestNetworkPlugins/group/auto/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

TestNetworkPlugins/group/auto/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.28s)

TestNetworkPlugins/group/calico/Start (68.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.167261084s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dvhdv" [6a1d3d49-e0e2-413a-a643-0cdc6d391e29] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.031174421s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-414377 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.5s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-414377 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-g94hv" [20560b63-bd40-4dae-ad22-51ab8011303f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-g94hv" [20560b63-bd40-4dae-ad22-51ab8011303f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00850879s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.50s)

TestNetworkPlugins/group/kindnet/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-414377 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.31s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.29s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-btdz2" [430679bb-c7f3-4292-be73-4f76c2f61eae] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.045391941s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/custom-flannel/Start (68.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m8.608026624s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.61s)

TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-414377 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

TestNetworkPlugins/group/calico/NetCatPod (13.73s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-414377 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qzgq2" [9c3b3a82-8678-4b16-957d-6d74caef01a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 21:20:43.618166  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/no-preload-440201/client.crt: no such file or directory
E0717 21:20:48.696647  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/ingress-addon-legacy-786531/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-qzgq2" [9c3b3a82-8678-4b16-957d-6d74caef01a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.028813558s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.73s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-414377 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (89.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m29.3111501s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.31s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-414377 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-414377 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-2gwgg" [9c4e2d23-7cc9-4875-beb9-20ba82a514b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-2gwgg" [9c4e2d23-7cc9-4875-beb9-20ba82a514b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.011473371s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.52s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-414377 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (68.46s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0717 21:22:27.902895  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/default-k8s-diff-port-332286/client.crt: no such file or directory
E0717 21:22:48.383586  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/default-k8s-diff-port-332286/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m8.46455089s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.46s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-414377 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.61s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-414377 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-pfshv" [c47eb82f-e05e-4df6-96d4-48608ce56698] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-pfshv" [c47eb82f-e05e-4df6-96d4-48608ce56698] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.009862142s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.61s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-414377 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (48.98s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-414377 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (48.97837084s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.98s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f7jhv" [7510e6e8-e22d-43e4-889c-eb901c61801e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.035684285s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-414377 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (10.44s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-414377 replace --force -f testdata/netcat-deployment.yaml
E0717 21:23:41.650744  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/old-k8s-version-309034/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-s5m4l" [1c44e48f-3f30-4f15-8f2c-5cd9efe144a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-s5m4l" [1c44e48f-3f30-4f15-8f2c-5cd9efe144a4] Running
E0717 21:23:51.367063  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/auto-414377/client.crt: no such file or directory
E0717 21:23:51.372325  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/auto-414377/client.crt: no such file or directory
E0717 21:23:51.382577  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/auto-414377/client.crt: no such file or directory
E0717 21:23:51.402860  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/auto-414377/client.crt: no such file or directory
E0717 21:23:51.443127  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/auto-414377/client.crt: no such file or directory
E0717 21:23:51.523428  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/auto-414377/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009390875s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-414377 exec deployment/netcat -- nslookup kubernetes.default
E0717 21:23:51.684139  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/auto-414377/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0717 21:23:52.004547  903997 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/auto-414377/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)
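The HairPin subtests assert that a pod can connect back to itself through its own service name, via `nc -w 5 -i 5 -z netcat 8080`. The pass/fail criterion is simply a TCP connect completing within the timeout; a minimal Python sketch of that probe (the local listener below is a hypothetical stand-in for the in-cluster netcat service, which is not reachable outside the cluster):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Mirror of the `nc -z -w 5` probe: succeed iff a TCP connection
    to host:port completes within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in listener so the sketch runs outside a cluster.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

print(can_connect("127.0.0.1", port))  # True: the port is listening
srv.close()
print(can_connect("127.0.0.1", port))  # False: connection refused
```

Note this sketch only captures the reachability check; the actual hairpin case additionally exercises the CNI's ability to route a pod's packets back to itself through the service VIP.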

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-414377 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-414377 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kklfl" [db352428-ded4-4200-8c23-7a30bc0b8354] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-kklfl" [db352428-ded4-4200-8c23-7a30bc0b8354] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00852433s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-414377 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)
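The DNS subtests pass when `nslookup kubernetes.default`, run inside the netcat pod, returns a record, i.e. the name resolves through cluster DNS. The same success criterion sketched in Python; since `kubernetes.default` only resolves inside the cluster, the example substitutes names with known behaviour (`name.invalid` is an illustrative non-resolvable name, per the reserved `.invalid` TLD):

```python
import socket

def resolves(name: str) -> bool:
    """True iff `name` resolves to at least one address -- the same
    pass/fail criterion as the suite's nslookup check."""
    try:
        return len(socket.getaddrinfo(name, None)) > 0
    except socket.gaierror:
        return False

print(resolves("localhost"))     # True on any host with a resolver
print(resolves("name.invalid"))  # False: .invalid is reserved and never resolves
```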

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-414377 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    

Test skip (28/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-219942 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-219942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-219942
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-284152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-284152
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.64s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-414377 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-414377" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Jul 2023 20:54:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-366551
contexts:
- context:
    cluster: pause-366551
    extensions:
    - extension:
        last-update: Mon, 17 Jul 2023 20:54:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: pause-366551
  name: pause-366551
current-context: pause-366551
kind: Config
preferences: {}
users:
- name: pause-366551
  user:
    client-certificate: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/pause-366551/client.crt
    client-key: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/pause-366551/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-414377

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-414377"

                                                
                                                
----------------------- debugLogs end: kubenet-414377 [took: 3.46207547s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-414377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-414377
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-414377 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-414377" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16890-898608/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Jul 2023 20:54:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-366551
contexts:
- context:
    cluster: pause-366551
    extensions:
    - extension:
        last-update: Mon, 17 Jul 2023 20:54:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: pause-366551
  name: pause-366551
current-context: pause-366551
kind: Config
preferences: {}
users:
- name: pause-366551
  user:
    client-certificate: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/pause-366551/client.crt
    client-key: /home/jenkins/minikube-integration/16890-898608/.minikube/profiles/pause-366551/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-414377

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-414377" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-414377"

                                                
                                                
----------------------- debugLogs end: cilium-414377 [took: 4.39734835s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-414377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-414377
--- SKIP: TestNetworkPlugins/group/cilium (4.58s)

                                                
                                    