Test Report: Docker_Linux_crio_arm64 19302

686e9da65a2d4195f8e8610efbc417c3b07d1722 : 2024-07-19 : 35410

Tests failed (2/336)

Order  Failed test                        Duration (s)
39     TestAddons/parallel/Ingress        152.18
41     TestAddons/parallel/MetricsServer  286.14
TestAddons/parallel/Ingress (152.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-014077 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-014077 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-014077 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e3e089fe-0bb0-4634-9baf-504c27b7510c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e3e089fe-0bb0-4634-9baf-504c27b7510c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003764131s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-014077 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.704256271s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-014077 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-014077 addons disable ingress --alsologtostderr -v=1: (7.782911934s)
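The curl probe above failed with "ssh: Process exited with status 28". Since `minikube ssh` propagates the remote command's exit status, 28 is curl's own exit code, which curl documents as an operation timeout. A minimal sketch for triaging such failures (the reproduction command mirrors the one in this log; the helper function name is ours, and the code meanings are taken from curl's documented return codes):

```shell
#!/usr/bin/env sh
# Reproduction of the failing check from this report:
#   out/minikube-linux-arm64 -p addons-014077 ssh \
#     "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
#
# Map a few common curl exit codes to their documented meanings,
# to interpret the status that ssh propagated back.
curl_exit_meaning() {
  case "$1" in
    6)  echo "could not resolve host" ;;
    7)  echo "failed to connect to host" ;;
    28) echo "operation timed out" ;;
    52) echo "empty reply from server" ;;
    *)  echo "other curl error ($1)" ;;
  esac
}

curl_exit_meaning 28   # the status seen in this failure: operation timed out
```

An exit of 28 (rather than 7) suggests the ingress controller accepted the connection but never answered within curl's window, pointing at the backend rather than at port publishing.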
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-014077
helpers_test.go:235: (dbg) docker inspect addons-014077:

-- stdout --
	[
	    {
	        "Id": "b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498",
	        "Created": "2024-07-19T04:30:45.424624926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 444736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-19T04:30:45.576248748Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2c91a2178aa1acdb3eade350c62303b0cf135b362b91c6aa21cd060c2dbfcac",
	        "ResolvConfPath": "/var/lib/docker/containers/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498/hostname",
	        "HostsPath": "/var/lib/docker/containers/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498/hosts",
	        "LogPath": "/var/lib/docker/containers/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498-json.log",
	        "Name": "/addons-014077",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-014077:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-014077",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/929455911d926a5cf5a855b18d923336d1f8289094a1046fa1839e84eddd6c55-init/diff:/var/lib/docker/overlay2/dcda698d7750c866c9c7e796269374bca18e6015fe6311f8c109dc57f1eac077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/929455911d926a5cf5a855b18d923336d1f8289094a1046fa1839e84eddd6c55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/929455911d926a5cf5a855b18d923336d1f8289094a1046fa1839e84eddd6c55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/929455911d926a5cf5a855b18d923336d1f8289094a1046fa1839e84eddd6c55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-014077",
	                "Source": "/var/lib/docker/volumes/addons-014077/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-014077",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-014077",
	                "name.minikube.sigs.k8s.io": "addons-014077",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "182e0f89ccf661fa762f8d194caa1bf945b2b44445e06e7e292c6ae4fc1c63fb",
	            "SandboxKey": "/var/run/docker/netns/182e0f89ccf6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-014077": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "dd06f12297e2f764538135a4f71c8ffc11141139f2efc3df4f09c713ecba82d7",
	                    "EndpointID": "669977e504b424d4c980c7c955831ccae23c9fa3a732c705a48fcfaabe9fc350",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-014077",
	                        "b73d64c37adb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-014077 -n addons-014077
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-014077 logs -n 25: (1.498662215s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-521092                                                                     | download-only-521092   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| delete  | -p download-only-596201                                                                     | download-only-596201   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| delete  | -p download-only-248286                                                                     | download-only-248286   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| delete  | -p download-only-521092                                                                     | download-only-521092   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| start   | --download-only -p                                                                          | download-docker-335826 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | download-docker-335826                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-335826                                                                   | download-docker-335826 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-028250   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | binary-mirror-028250                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33677                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-028250                                                                     | binary-mirror-028250   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | addons-014077                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | addons-014077                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-014077 --wait=true                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | -p addons-014077                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-014077 ip                                                                            | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	| addons  | addons-014077 addons disable                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | -p addons-014077                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-014077 ssh cat                                                                       | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | /opt/local-path-provisioner/pvc-b5821a76-1b15-48b8-80bb-7ba2cf9bbdd9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-014077 addons disable                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | addons-014077                                                                               |                        |         |         |                     |                     |
	| addons  | addons-014077 addons                                                                        | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:35 UTC | 19 Jul 24 04:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-014077 addons                                                                        | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:35 UTC | 19 Jul 24 04:35 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:35 UTC | 19 Jul 24 04:35 UTC |
	|         | addons-014077                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-014077 ssh curl -s                                                                   | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-014077 ip                                                                            | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:38 UTC | 19 Jul 24 04:38 UTC |
	| addons  | addons-014077 addons disable                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:38 UTC | 19 Jul 24 04:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-014077 addons disable                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:38 UTC | 19 Jul 24 04:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:30:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:30:20.802949  444204 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:20.803104  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:20.803113  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:20.803118  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:20.803547  444204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 04:30:20.804016  444204 out.go:298] Setting JSON to false
	I0719 04:30:20.804969  444204 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7966,"bootTime":1721355455,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0719 04:30:20.805042  444204 start.go:139] virtualization:  
	I0719 04:30:20.807518  444204 out.go:177] * [addons-014077] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0719 04:30:20.809708  444204 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:30:20.809772  444204 notify.go:220] Checking for updates...
	I0719 04:30:20.813226  444204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:30:20.814832  444204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:30:20.816486  444204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	I0719 04:30:20.818398  444204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0719 04:30:20.820394  444204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:30:20.822749  444204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:30:20.846594  444204 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 04:30:20.846724  444204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:30:20.909817  444204 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-19 04:30:20.900026691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:30:20.909927  444204 docker.go:307] overlay module found
	I0719 04:30:20.912152  444204 out.go:177] * Using the docker driver based on user configuration
	I0719 04:30:20.914082  444204 start.go:297] selected driver: docker
	I0719 04:30:20.914097  444204 start.go:901] validating driver "docker" against <nil>
	I0719 04:30:20.914110  444204 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:30:20.916008  444204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:30:20.965900  444204 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-19 04:30:20.956338267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:30:20.966081  444204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:30:20.966315  444204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:30:20.968339  444204 out.go:177] * Using Docker driver with root privileges
	I0719 04:30:20.970373  444204 cni.go:84] Creating CNI manager for ""
	I0719 04:30:20.970406  444204 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:30:20.970417  444204 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:30:20.970668  444204 start.go:340] cluster config:
	{Name:addons-014077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:30:20.973996  444204 out.go:177] * Starting "addons-014077" primary control-plane node in "addons-014077" cluster
	I0719 04:30:20.975824  444204 cache.go:121] Beginning downloading kic base image for docker with crio
	I0719 04:30:20.977567  444204 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0719 04:30:20.979239  444204 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:30:20.979264  444204 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 04:30:20.979289  444204 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0719 04:30:20.979298  444204 cache.go:56] Caching tarball of preloaded images
	I0719 04:30:20.979378  444204 preload.go:172] Found /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0719 04:30:20.979388  444204 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:30:20.979742  444204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/config.json ...
	I0719 04:30:20.979819  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/config.json: {Name:mkee8ab7b1c9c5d1f3baa8814ec326c921c9f362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:20.994400  444204 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 04:30:20.994546  444204 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 04:30:20.994569  444204 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 04:30:20.994574  444204 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 04:30:20.994584  444204 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 04:30:20.994591  444204 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0719 04:30:37.894598  444204 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0719 04:30:37.894638  444204 cache.go:194] Successfully downloaded all kic artifacts
	I0719 04:30:37.894739  444204 start.go:360] acquireMachinesLock for addons-014077: {Name:mk616a464a7e762d13268277321c4ef16174e532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:30:37.894865  444204 start.go:364] duration metric: took 102.439µs to acquireMachinesLock for "addons-014077"
	I0719 04:30:37.894899  444204 start.go:93] Provisioning new machine with config: &{Name:addons-014077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:30:37.894993  444204 start.go:125] createHost starting for "" (driver="docker")
	I0719 04:30:37.897357  444204 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0719 04:30:37.897602  444204 start.go:159] libmachine.API.Create for "addons-014077" (driver="docker")
	I0719 04:30:37.897636  444204 client.go:168] LocalClient.Create starting
	I0719 04:30:37.897749  444204 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem
	I0719 04:30:38.308142  444204 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/cert.pem
	I0719 04:30:38.692879  444204 cli_runner.go:164] Run: docker network inspect addons-014077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0719 04:30:38.707708  444204 cli_runner.go:211] docker network inspect addons-014077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0719 04:30:38.707815  444204 network_create.go:284] running [docker network inspect addons-014077] to gather additional debugging logs...
	I0719 04:30:38.707840  444204 cli_runner.go:164] Run: docker network inspect addons-014077
	W0719 04:30:38.722196  444204 cli_runner.go:211] docker network inspect addons-014077 returned with exit code 1
	I0719 04:30:38.722237  444204 network_create.go:287] error running [docker network inspect addons-014077]: docker network inspect addons-014077: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-014077 not found
	I0719 04:30:38.722251  444204 network_create.go:289] output of [docker network inspect addons-014077]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-014077 not found
	
	** /stderr **
	I0719 04:30:38.722349  444204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0719 04:30:38.739974  444204 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017badb0}
	I0719 04:30:38.740018  444204 network_create.go:124] attempt to create docker network addons-014077 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0719 04:30:38.740080  444204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-014077 addons-014077
	I0719 04:30:38.812132  444204 network_create.go:108] docker network addons-014077 192.168.49.0/24 created
	I0719 04:30:38.812179  444204 kic.go:121] calculated static IP "192.168.49.2" for the "addons-014077" container
	I0719 04:30:38.812257  444204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0719 04:30:38.826967  444204 cli_runner.go:164] Run: docker volume create addons-014077 --label name.minikube.sigs.k8s.io=addons-014077 --label created_by.minikube.sigs.k8s.io=true
	I0719 04:30:38.842896  444204 oci.go:103] Successfully created a docker volume addons-014077
	I0719 04:30:38.843006  444204 cli_runner.go:164] Run: docker run --rm --name addons-014077-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-014077 --entrypoint /usr/bin/test -v addons-014077:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0719 04:30:40.942642  444204 cli_runner.go:217] Completed: docker run --rm --name addons-014077-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-014077 --entrypoint /usr/bin/test -v addons-014077:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib: (2.099583579s)
	I0719 04:30:40.942673  444204 oci.go:107] Successfully prepared a docker volume addons-014077
	I0719 04:30:40.942698  444204 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:30:40.942717  444204 kic.go:194] Starting extracting preloaded images to volume ...
	I0719 04:30:40.942809  444204 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-014077:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0719 04:30:45.354497  444204 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-014077:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir: (4.411645562s)
	I0719 04:30:45.354529  444204 kic.go:203] duration metric: took 4.411809267s to extract preloaded images to volume ...
	W0719 04:30:45.354673  444204 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0719 04:30:45.354788  444204 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0719 04:30:45.408999  444204 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-014077 --name addons-014077 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-014077 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-014077 --network addons-014077 --ip 192.168.49.2 --volume addons-014077:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f
	I0719 04:30:45.731752  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Running}}
	I0719 04:30:45.759468  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:30:45.786413  444204 cli_runner.go:164] Run: docker exec addons-014077 stat /var/lib/dpkg/alternatives/iptables
	I0719 04:30:45.848103  444204 oci.go:144] the created container "addons-014077" has a running status.
	I0719 04:30:45.848133  444204 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa...
	I0719 04:30:46.279285  444204 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0719 04:30:46.305618  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:30:46.330312  444204 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0719 04:30:46.330330  444204 kic_runner.go:114] Args: [docker exec --privileged addons-014077 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0719 04:30:46.395506  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:30:46.423233  444204 machine.go:94] provisionDockerMachine start ...
	I0719 04:30:46.423326  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:46.447546  444204 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:46.447806  444204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0719 04:30:46.447816  444204 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:30:46.614952  444204 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-014077
	
	I0719 04:30:46.615024  444204 ubuntu.go:169] provisioning hostname "addons-014077"
	I0719 04:30:46.615126  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:46.640795  444204 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:46.641188  444204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0719 04:30:46.641204  444204 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-014077 && echo "addons-014077" | sudo tee /etc/hostname
	I0719 04:30:46.780377  444204 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-014077
	
	I0719 04:30:46.780464  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:46.799449  444204 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:46.799689  444204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0719 04:30:46.799706  444204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-014077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-014077/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-014077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:30:46.926490  444204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:30:46.926517  444204 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19302-437615/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-437615/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-437615/.minikube}
	I0719 04:30:46.926547  444204 ubuntu.go:177] setting up certificates
	I0719 04:30:46.926563  444204 provision.go:84] configureAuth start
	I0719 04:30:46.926629  444204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-014077
	I0719 04:30:46.944074  444204 provision.go:143] copyHostCerts
	I0719 04:30:46.944163  444204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-437615/.minikube/ca.pem (1078 bytes)
	I0719 04:30:46.944297  444204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-437615/.minikube/cert.pem (1123 bytes)
	I0719 04:30:46.944354  444204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-437615/.minikube/key.pem (1675 bytes)
	I0719 04:30:46.944409  444204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-437615/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca-key.pem org=jenkins.addons-014077 san=[127.0.0.1 192.168.49.2 addons-014077 localhost minikube]
	I0719 04:30:47.276347  444204 provision.go:177] copyRemoteCerts
	I0719 04:30:47.276418  444204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:30:47.276462  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.294117  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:47.383353  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 04:30:47.408771  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:30:47.432964  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:30:47.457784  444204 provision.go:87] duration metric: took 531.204388ms to configureAuth
	I0719 04:30:47.457814  444204 ubuntu.go:193] setting minikube options for container-runtime
	I0719 04:30:47.458009  444204 config.go:182] Loaded profile config "addons-014077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:30:47.458147  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.474517  444204 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:47.474767  444204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0719 04:30:47.474790  444204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:30:47.701211  444204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:30:47.701242  444204 machine.go:97] duration metric: took 1.277985337s to provisionDockerMachine
	I0719 04:30:47.701254  444204 client.go:171] duration metric: took 9.803607835s to LocalClient.Create
	I0719 04:30:47.701266  444204 start.go:167] duration metric: took 9.80366444s to libmachine.API.Create "addons-014077"
	I0719 04:30:47.701273  444204 start.go:293] postStartSetup for "addons-014077" (driver="docker")
	I0719 04:30:47.701284  444204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:30:47.701362  444204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:30:47.701407  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.718679  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:47.812177  444204 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:30:47.815395  444204 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0719 04:30:47.815480  444204 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0719 04:30:47.815507  444204 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0719 04:30:47.815543  444204 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0719 04:30:47.815574  444204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-437615/.minikube/addons for local assets ...
	I0719 04:30:47.815681  444204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-437615/.minikube/files for local assets ...
	I0719 04:30:47.815744  444204 start.go:296] duration metric: took 114.46386ms for postStartSetup
	I0719 04:30:47.816136  444204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-014077
	I0719 04:30:47.832623  444204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/config.json ...
	I0719 04:30:47.832914  444204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:47.832960  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.849429  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:47.935084  444204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0719 04:30:47.939554  444204 start.go:128] duration metric: took 10.044544438s to createHost
	I0719 04:30:47.939580  444204 start.go:83] releasing machines lock for "addons-014077", held for 10.044698897s
	I0719 04:30:47.939654  444204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-014077
	I0719 04:30:47.955255  444204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:30:47.955347  444204 ssh_runner.go:195] Run: cat /version.json
	I0719 04:30:47.955380  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.955518  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.979824  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:47.983706  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:48.190958  444204 ssh_runner.go:195] Run: systemctl --version
	I0719 04:30:48.195452  444204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:30:48.336715  444204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 04:30:48.341590  444204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:30:48.363714  444204 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0719 04:30:48.363790  444204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:30:48.395762  444204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
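The bridge-CNI disabling step above (rename every `*bridge*`/`*podman*` config that is not already suffixed `.mk_disabled`) can be reproduced against a scratch directory instead of `/etc/cni/net.d`; the paths and file names below are illustrative, not taken from the test host:

```shell
# Simulate /etc/cni/net.d with throwaway config files.
mkdir -p /tmp/cni-demo
cd /tmp/cni-demo
touch 87-podman-bridge.conflist 100-crio-bridge.conf 10-kindnet.conflist

# Rename every bridge/podman config that is not already disabled,
# mirroring minikube's find invocation (without sudo).
find . -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls
```

The kindnet config is left untouched because it matches neither pattern, which is exactly why the log later reports only the podman and crio bridge configs as disabled.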
	I0719 04:30:48.395783  444204 start.go:495] detecting cgroup driver to use...
	I0719 04:30:48.395816  444204 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 04:30:48.395867  444204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:30:48.413430  444204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:30:48.425421  444204 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:30:48.425541  444204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:30:48.440755  444204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:30:48.455758  444204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:30:48.546854  444204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:30:48.643462  444204 docker.go:233] disabling docker service ...
	I0719 04:30:48.643539  444204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:30:48.664500  444204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:30:48.676433  444204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:30:48.770823  444204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:30:48.867659  444204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:30:48.879768  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:30:48.896659  444204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:30:48.896729  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.907098  444204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:30:48.907178  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.917411  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.927160  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.938148  444204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:30:48.948796  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.959023  444204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.976415  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
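The sequence of sed edits above rewrites `/etc/crio/crio.conf.d/02-crio.conf` to pin the pause image, switch the cgroup driver to cgroupfs, and open low ports to unprivileged containers. The same edits can be replayed on a scratch copy; the starting file contents here are illustrative (any prior values would be overwritten the same way):

```shell
# Scratch copy standing in for /etc/crio/crio.conf.d/02-crio.conf.
conf=/tmp/02-crio.conf
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "k8s.gcr.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Point CRI-O at the expected pause image.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
# Switch the cgroup driver, then reset conmon_cgroup to match it.
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
# Ensure a default_sysctls list exists, then allow binding low ports.
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"

cat "$conf"
```

The guard before creating `default_sysctls` is why the idempotent re-run of these edits (e.g. on a restart) does not duplicate the list.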
	I0719 04:30:48.986825  444204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:30:48.995565  444204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:30:49.005142  444204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:30:49.085548  444204 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:30:49.202816  444204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:30:49.202958  444204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:30:49.206417  444204 start.go:563] Will wait 60s for crictl version
	I0719 04:30:49.206546  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:30:49.209853  444204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:30:49.253589  444204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0719 04:30:49.253689  444204 ssh_runner.go:195] Run: crio --version
	I0719 04:30:49.291861  444204 ssh_runner.go:195] Run: crio --version
	I0719 04:30:49.338554  444204 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0719 04:30:49.340467  444204 cli_runner.go:164] Run: docker network inspect addons-014077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0719 04:30:49.355844  444204 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0719 04:30:49.359485  444204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:30:49.370977  444204 kubeadm.go:883] updating cluster {Name:addons-014077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:30:49.371106  444204 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:30:49.371166  444204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:30:49.447838  444204 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:30:49.447862  444204 crio.go:433] Images already preloaded, skipping extraction
	I0719 04:30:49.447919  444204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:30:49.483791  444204 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:30:49.483814  444204 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:30:49.483822  444204 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0719 04:30:49.483931  444204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-014077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:30:49.484013  444204 ssh_runner.go:195] Run: crio config
	I0719 04:30:49.537701  444204 cni.go:84] Creating CNI manager for ""
	I0719 04:30:49.537726  444204 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:30:49.537736  444204 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:30:49.537759  444204 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-014077 NodeName:addons-014077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:30:49.537907  444204 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-014077"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 04:30:49.537979  444204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:30:49.547309  444204 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:30:49.547438  444204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 04:30:49.556265  444204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0719 04:30:49.574546  444204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:30:49.593388  444204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0719 04:30:49.612012  444204 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0719 04:30:49.615486  444204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:30:49.626545  444204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:30:49.708718  444204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:30:49.722745  444204 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077 for IP: 192.168.49.2
	I0719 04:30:49.722771  444204 certs.go:194] generating shared ca certs ...
	I0719 04:30:49.722787  444204 certs.go:226] acquiring lock for ca certs: {Name:mka5df50fae162dd91003b3c847084951b043e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:49.722920  444204 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-437615/.minikube/ca.key
	I0719 04:30:50.294156  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt ...
	I0719 04:30:50.294192  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt: {Name:mk8ac4967e1da44eed49d1fa6eec2d763c8c81b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.294393  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/ca.key ...
	I0719 04:30:50.294408  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/ca.key: {Name:mk5d1e1346fbfb309ccf6d4beebe0758d3d62000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.294527  444204 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.key
	I0719 04:30:50.539424  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.crt ...
	I0719 04:30:50.539456  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.crt: {Name:mk1acb8d6a21cc45d0e0a6fc2023765575aabb27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.539669  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.key ...
	I0719 04:30:50.539685  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.key: {Name:mka21f1117bec7af2998ac10a45eb4cd14bd52b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.539780  444204 certs.go:256] generating profile certs ...
	I0719 04:30:50.539852  444204 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.key
	I0719 04:30:50.539872  444204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt with IP's: []
	I0719 04:30:50.978372  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt ...
	I0719 04:30:50.978406  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: {Name:mk8cebfc96bc64b731889b761fee19626bad3c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.978663  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.key ...
	I0719 04:30:50.978680  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.key: {Name:mk2b85bdf8ae2d2f7b417bc8f7d652d47cc7966d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.978807  444204 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key.38301cbe
	I0719 04:30:50.978831  444204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt.38301cbe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0719 04:30:51.424941  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt.38301cbe ...
	I0719 04:30:51.424972  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt.38301cbe: {Name:mk946ec8e4faa816afdeb1a978f1189b66bbb20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:51.425165  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key.38301cbe ...
	I0719 04:30:51.425181  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key.38301cbe: {Name:mk25c360448449a4cc30b83e4d5ab6a0542b472b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:51.425275  444204 certs.go:381] copying /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt.38301cbe -> /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt
	I0719 04:30:51.425356  444204 certs.go:385] copying /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key.38301cbe -> /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key
	I0719 04:30:51.425410  444204 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.key
	I0719 04:30:51.425430  444204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.crt with IP's: []
	I0719 04:30:52.030496  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.crt ...
	I0719 04:30:52.030531  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.crt: {Name:mka9b3edf7bea73488c77469cac7fd32772f8a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:52.030719  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.key ...
	I0719 04:30:52.030738  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.key: {Name:mk955c77244fa45e611df07f07790a16c6a3d13a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:52.030927  444204 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:30:52.030974  444204 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem (1078 bytes)
	I0719 04:30:52.031011  444204 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:30:52.031050  444204 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/key.pem (1675 bytes)
	I0719 04:30:52.031702  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:30:52.057889  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 04:30:52.086532  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:30:52.115393  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:30:52.140100  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 04:30:52.165051  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:30:52.189785  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:30:52.213627  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:30:52.239275  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:30:52.265398  444204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:30:52.285005  444204 ssh_runner.go:195] Run: openssl version
	I0719 04:30:52.291045  444204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:30:52.301428  444204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:30:52.305383  444204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 04:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:30:52.305453  444204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:30:52.312694  444204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
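The symlink created above follows OpenSSL's lookup convention: certificates in `/etc/ssl/certs` are found by subject-hash filename (`<hash>.0`), which is why minikube computes `openssl x509 -hash` first. A sketch with a throwaway self-signed certificate in `/tmp` (all names here are illustrative, and it assumes the `openssl` CLI is installed):

```shell
# Make a throwaway CA cert to stand in for minikubeCA.pem.
dir=/tmp/certs-demo
mkdir -p "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/demoCA.pem" 2>/dev/null

# Link it under the subject-hash name OpenSSL will look up.
hash=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")
ln -fs "$dir/demoCA.pem" "$dir/$hash.0"

ls -l "$dir/$hash.0"
```

The `b5213941.0` name in the log is simply this hash for the generated minikubeCA certificate.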
	I0719 04:30:52.322940  444204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:30:52.327194  444204 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:30:52.327282  444204 kubeadm.go:392] StartCluster: {Name:addons-014077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:30:52.327390  444204 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 04:30:52.327476  444204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 04:30:52.367305  444204 cri.go:89] found id: ""
	I0719 04:30:52.367411  444204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 04:30:52.376760  444204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 04:30:52.385747  444204 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0719 04:30:52.385842  444204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 04:30:52.394746  444204 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 04:30:52.394767  444204 kubeadm.go:157] found existing configuration files:
	
	I0719 04:30:52.394854  444204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 04:30:52.403763  444204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 04:30:52.403856  444204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 04:30:52.412360  444204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 04:30:52.420823  444204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 04:30:52.420913  444204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 04:30:52.429492  444204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 04:30:52.438422  444204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 04:30:52.438594  444204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 04:30:52.447001  444204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 04:30:52.455706  444204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 04:30:52.455789  444204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 04:30:52.464232  444204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0719 04:30:52.509359  444204 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 04:30:52.509666  444204 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 04:30:52.555708  444204 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0719 04:30:52.555780  444204 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
	I0719 04:30:52.555821  444204 kubeadm.go:310] OS: Linux
	I0719 04:30:52.555870  444204 kubeadm.go:310] CGROUPS_CPU: enabled
	I0719 04:30:52.555921  444204 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0719 04:30:52.555972  444204 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0719 04:30:52.556022  444204 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0719 04:30:52.556073  444204 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0719 04:30:52.556124  444204 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0719 04:30:52.556178  444204 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0719 04:30:52.556232  444204 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0719 04:30:52.556281  444204 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0719 04:30:52.623047  444204 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 04:30:52.623157  444204 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 04:30:52.623252  444204 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 04:30:52.884007  444204 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 04:30:52.887181  444204 out.go:204]   - Generating certificates and keys ...
	I0719 04:30:52.887360  444204 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 04:30:52.887473  444204 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 04:30:53.125184  444204 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 04:30:53.471778  444204 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 04:30:54.069542  444204 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 04:30:54.374981  444204 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 04:30:54.573548  444204 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 04:30:54.573846  444204 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-014077 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0719 04:30:54.756171  444204 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 04:30:54.756373  444204 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-014077 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0719 04:30:55.470784  444204 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 04:30:55.840287  444204 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 04:30:56.107252  444204 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 04:30:56.107533  444204 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 04:30:56.609068  444204 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 04:30:56.785031  444204 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 04:30:56.982557  444204 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 04:30:57.204466  444204 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 04:30:57.515904  444204 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 04:30:57.517440  444204 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 04:30:57.521524  444204 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 04:30:57.523763  444204 out.go:204]   - Booting up control plane ...
	I0719 04:30:57.523867  444204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 04:30:57.523944  444204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 04:30:57.524839  444204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 04:30:57.534855  444204 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 04:30:57.535939  444204 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 04:30:57.535989  444204 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 04:30:57.629075  444204 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 04:30:57.629162  444204 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 04:30:58.631770  444204 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002793621s
	I0719 04:30:58.631858  444204 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 04:31:05.136288  444204 kubeadm.go:310] [api-check] The API server is healthy after 6.502078961s
	I0719 04:31:05.157934  444204 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 04:31:05.179119  444204 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 04:31:05.206555  444204 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 04:31:05.206747  444204 kubeadm.go:310] [mark-control-plane] Marking the node addons-014077 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 04:31:05.218310  444204 kubeadm.go:310] [bootstrap-token] Using token: wnhp1p.6qfiwt67coucume8
	I0719 04:31:05.220299  444204 out.go:204]   - Configuring RBAC rules ...
	I0719 04:31:05.220438  444204 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 04:31:05.229076  444204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 04:31:05.239703  444204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 04:31:05.244861  444204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 04:31:05.249663  444204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 04:31:05.255274  444204 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 04:31:05.544067  444204 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 04:31:05.972060  444204 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 04:31:06.544442  444204 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 04:31:06.545794  444204 kubeadm.go:310] 
	I0719 04:31:06.545874  444204 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 04:31:06.545881  444204 kubeadm.go:310] 
	I0719 04:31:06.545956  444204 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 04:31:06.545967  444204 kubeadm.go:310] 
	I0719 04:31:06.546008  444204 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 04:31:06.546090  444204 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 04:31:06.546145  444204 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 04:31:06.546156  444204 kubeadm.go:310] 
	I0719 04:31:06.546209  444204 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 04:31:06.546218  444204 kubeadm.go:310] 
	I0719 04:31:06.546263  444204 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 04:31:06.546271  444204 kubeadm.go:310] 
	I0719 04:31:06.546321  444204 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 04:31:06.546396  444204 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 04:31:06.546478  444204 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 04:31:06.546488  444204 kubeadm.go:310] 
	I0719 04:31:06.546569  444204 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 04:31:06.546645  444204 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 04:31:06.546654  444204 kubeadm.go:310] 
	I0719 04:31:06.546734  444204 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wnhp1p.6qfiwt67coucume8 \
	I0719 04:31:06.546836  444204 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:adc12064acfd4256056a937f24df92377801baea1f8829f0d6ba89254df1b00b \
	I0719 04:31:06.546860  444204 kubeadm.go:310] 	--control-plane 
	I0719 04:31:06.546868  444204 kubeadm.go:310] 
	I0719 04:31:06.546949  444204 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 04:31:06.546957  444204 kubeadm.go:310] 
	I0719 04:31:06.547036  444204 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wnhp1p.6qfiwt67coucume8 \
	I0719 04:31:06.547136  444204 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:adc12064acfd4256056a937f24df92377801baea1f8829f0d6ba89254df1b00b 
	I0719 04:31:06.550389  444204 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
	I0719 04:31:06.550523  444204 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 04:31:06.550547  444204 cni.go:84] Creating CNI manager for ""
	I0719 04:31:06.550556  444204 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:31:06.552748  444204 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 04:31:06.554885  444204 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 04:31:06.558858  444204 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 04:31:06.558881  444204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 04:31:06.578965  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 04:31:06.841676  444204 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 04:31:06.841816  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:06.841898  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-014077 minikube.k8s.io/updated_at=2024_07_19T04_31_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=addons-014077 minikube.k8s.io/primary=true
	I0719 04:31:06.983121  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:06.983235  444204 ops.go:34] apiserver oom_adj: -16
	I0719 04:31:07.483652  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:07.983278  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:08.483306  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:08.983220  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:09.483673  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:09.983303  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:10.484100  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:10.983375  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:11.483412  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:11.984135  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:12.483794  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:12.983280  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:13.483871  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:13.983646  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:14.483269  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:14.983252  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:15.483841  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:15.984010  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:16.484237  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:16.983606  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:17.484007  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:17.983257  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:18.483936  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:18.983255  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:19.484016  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:19.608269  444204 kubeadm.go:1113] duration metric: took 12.766502447s to wait for elevateKubeSystemPrivileges
	I0719 04:31:19.608305  444204 kubeadm.go:394] duration metric: took 27.281027081s to StartCluster
	I0719 04:31:19.608323  444204 settings.go:142] acquiring lock: {Name:mkd73071bbdd6758849d0c7992cd9bb0e7ebcdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:19.608429  444204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:31:19.609288  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/kubeconfig: {Name:mk1a12c3f020bf8e8853640f940fd53850952b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:19.609852  444204 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:31:19.610571  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 04:31:19.610880  444204 config.go:182] Loaded profile config "addons-014077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:19.610925  444204 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 04:31:19.611310  444204 addons.go:69] Setting yakd=true in profile "addons-014077"
	I0719 04:31:19.611343  444204 addons.go:234] Setting addon yakd=true in "addons-014077"
	I0719 04:31:19.611560  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.612262  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.612695  444204 out.go:177] * Verifying Kubernetes components...
	I0719 04:31:19.613407  444204 addons.go:69] Setting metrics-server=true in profile "addons-014077"
	I0719 04:31:19.613461  444204 addons.go:234] Setting addon metrics-server=true in "addons-014077"
	I0719 04:31:19.613496  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.613507  444204 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-014077"
	I0719 04:31:19.613534  444204 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-014077"
	I0719 04:31:19.613568  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.614047  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.614143  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.615321  444204 addons.go:69] Setting registry=true in profile "addons-014077"
	I0719 04:31:19.615365  444204 addons.go:234] Setting addon registry=true in "addons-014077"
	I0719 04:31:19.615393  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.615849  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.634539  444204 addons.go:69] Setting cloud-spanner=true in profile "addons-014077"
	I0719 04:31:19.638116  444204 addons.go:234] Setting addon cloud-spanner=true in "addons-014077"
	I0719 04:31:19.638189  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.638729  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.636639  444204 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-014077"
	I0719 04:31:19.643098  444204 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-014077"
	I0719 04:31:19.643264  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.636652  444204 addons.go:69] Setting default-storageclass=true in profile "addons-014077"
	I0719 04:31:19.636657  444204 addons.go:69] Setting gcp-auth=true in profile "addons-014077"
	I0719 04:31:19.636660  444204 addons.go:69] Setting ingress=true in profile "addons-014077"
	I0719 04:31:19.636664  444204 addons.go:69] Setting ingress-dns=true in profile "addons-014077"
	I0719 04:31:19.636677  444204 addons.go:69] Setting inspektor-gadget=true in profile "addons-014077"
	I0719 04:31:19.636748  444204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:19.636944  444204 addons.go:69] Setting storage-provisioner=true in profile "addons-014077"
	I0719 04:31:19.636953  444204 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-014077"
	I0719 04:31:19.636961  444204 addons.go:69] Setting volcano=true in profile "addons-014077"
	I0719 04:31:19.636969  444204 addons.go:69] Setting volumesnapshots=true in profile "addons-014077"
	I0719 04:31:19.659570  444204 addons.go:234] Setting addon volumesnapshots=true in "addons-014077"
	I0719 04:31:19.659633  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.660114  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.660734  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.680819  444204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-014077"
	I0719 04:31:19.681173  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.688529  444204 addons.go:234] Setting addon storage-provisioner=true in "addons-014077"
	I0719 04:31:19.688628  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.689192  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.707056  444204 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-014077"
	I0719 04:31:19.708213  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.717000  444204 mustload.go:65] Loading cluster: addons-014077
	I0719 04:31:19.717260  444204 config.go:182] Loaded profile config "addons-014077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:19.717551  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.739026  444204 addons.go:234] Setting addon volcano=true in "addons-014077"
	I0719 04:31:19.739156  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.739617  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.739949  444204 addons.go:234] Setting addon ingress=true in "addons-014077"
	I0719 04:31:19.740034  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.740466  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.771390  444204 addons.go:234] Setting addon ingress-dns=true in "addons-014077"
	I0719 04:31:19.771503  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.771979  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.802393  444204 addons.go:234] Setting addon inspektor-gadget=true in "addons-014077"
	I0719 04:31:19.802517  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.802998  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.842096  444204 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 04:31:19.850949  444204 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 04:31:19.851017  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 04:31:19.851119  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.858505  444204 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 04:31:19.860552  444204 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 04:31:19.861203  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 04:31:19.861241  444204 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 04:31:19.861425  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.862817  444204 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 04:31:19.863068  444204 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 04:31:19.863082  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 04:31:19.863144  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.865914  444204 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 04:31:19.869389  444204 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-014077"
	I0719 04:31:19.869441  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.869853  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.971092  444204 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 04:31:19.971133  444204 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 04:31:19.971216  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.971452  444204 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 04:31:19.973473  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 04:31:19.975499  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 04:31:19.976800  444204 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 04:31:19.976829  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 04:31:19.976900  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	W0719 04:31:19.986792  444204 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0719 04:31:19.987066  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:19.988282  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 04:31:19.989514  444204 addons.go:234] Setting addon default-storageclass=true in "addons-014077"
	I0719 04:31:19.989547  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.989959  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.990142  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:19.991041  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.993262  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 04:31:19.995896  444204 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 04:31:19.996035  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.995833  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 04:31:20.022126  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 04:31:20.026580  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 04:31:19.995844  444204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 04:31:20.030376  444204 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:31:20.030400  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 04:31:20.034636  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.034844  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 04:31:20.036961  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 04:31:20.042596  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 04:31:20.046562  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 04:31:20.046602  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 04:31:20.046691  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.062518  444204 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 04:31:20.064855  444204 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 04:31:20.066883  444204 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 04:31:20.071027  444204 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 04:31:20.071053  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 04:31:20.071211  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.088740  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.090936  444204 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 04:31:20.091068  444204 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 04:31:20.093974  444204 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 04:31:20.094000  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 04:31:20.094079  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.094310  444204 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 04:31:20.094322  444204 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 04:31:20.094379  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.123060  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.139082  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.185413  444204 out.go:177]   - Using image docker.io/busybox:stable
	I0719 04:31:20.190916  444204 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 04:31:20.192772  444204 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 04:31:20.192797  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 04:31:20.192868  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.246206  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 04:31:20.274968  444204 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 04:31:20.274988  444204 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 04:31:20.275052  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.283424  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.289876  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.303429  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.303811  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.304807  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.331034  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.331040  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.368107  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.482718  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 04:31:20.518035  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 04:31:20.518061  444204 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 04:31:20.568342  444204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:31:20.573901  444204 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 04:31:20.573932  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 04:31:20.632722  444204 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 04:31:20.632748  444204 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 04:31:20.666288  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 04:31:20.706807  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 04:31:20.706853  444204 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 04:31:20.766299  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 04:31:20.766327  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 04:31:20.769351  444204 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 04:31:20.769375  444204 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 04:31:20.772615  444204 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 04:31:20.772638  444204 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 04:31:20.778162  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 04:31:20.832130  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 04:31:20.834712  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 04:31:20.841755  444204 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 04:31:20.841783  444204 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 04:31:20.846053  444204 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 04:31:20.846081  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 04:31:20.849270  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 04:31:20.874411  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:31:20.880707  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 04:31:20.880747  444204 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 04:31:20.929227  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 04:31:20.929253  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 04:31:20.959969  444204 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 04:31:20.959996  444204 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 04:31:20.962023  444204 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 04:31:20.962048  444204 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 04:31:20.991160  444204 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 04:31:20.991201  444204 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 04:31:21.031050  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 04:31:21.074125  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 04:31:21.074157  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 04:31:21.137566  444204 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 04:31:21.137610  444204 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 04:31:21.147136  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 04:31:21.155915  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 04:31:21.155959  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 04:31:21.186084  444204 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 04:31:21.186123  444204 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 04:31:21.260889  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 04:31:21.310179  444204 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 04:31:21.310205  444204 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 04:31:21.352552  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 04:31:21.352592  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 04:31:21.358560  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 04:31:21.358586  444204 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 04:31:21.465121  444204 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 04:31:21.465161  444204 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 04:31:21.470924  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 04:31:21.470950  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 04:31:21.495128  444204 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 04:31:21.495160  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 04:31:21.559260  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 04:31:21.580539  444204 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 04:31:21.580566  444204 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 04:31:21.606821  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 04:31:21.606852  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 04:31:21.661159  444204 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 04:31:21.661184  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 04:31:21.686134  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 04:31:21.686177  444204 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 04:31:21.696234  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 04:31:21.733506  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 04:31:21.733532  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 04:31:21.847299  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 04:31:21.847333  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 04:31:22.013287  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 04:31:22.013319  444204 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 04:31:22.177555  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 04:31:23.066419  444204 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.820173963s)
	I0719 04:31:23.066464  444204 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0719 04:31:23.067034  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.584286042s)
	I0719 04:31:23.067183  444204 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.498817727s)
	I0719 04:31:23.068782  444204 node_ready.go:35] waiting up to 6m0s for node "addons-014077" to be "Ready" ...
	I0719 04:31:23.738094  444204 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-014077" context rescaled to 1 replicas
	I0719 04:31:24.124244  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.45791764s)
	I0719 04:31:24.193565  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.415366394s)
	I0719 04:31:24.678470  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.846302782s)
	I0719 04:31:25.080363  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:25.673659  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.838900366s)
	I0719 04:31:25.673702  444204 addons.go:475] Verifying addon ingress=true in "addons-014077"
	I0719 04:31:25.674074  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.642991932s)
	I0719 04:31:25.674115  444204 addons.go:475] Verifying addon registry=true in "addons-014077"
	I0719 04:31:25.673858  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.824466928s)
	I0719 04:31:25.673899  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.79946494s)
	I0719 04:31:25.674480  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.52726484s)
	I0719 04:31:25.674498  444204 addons.go:475] Verifying addon metrics-server=true in "addons-014077"
	I0719 04:31:25.674542  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.413620715s)
	I0719 04:31:25.676987  444204 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-014077 service yakd-dashboard -n yakd-dashboard
	
	I0719 04:31:25.677072  444204 out.go:177] * Verifying ingress addon...
	I0719 04:31:25.677173  444204 out.go:177] * Verifying registry addon...
	I0719 04:31:25.680746  444204 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 04:31:25.681002  444204 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 04:31:25.701097  444204 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 04:31:25.701125  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:25.702111  444204 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 04:31:25.702131  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:25.749746  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.053464222s)
	I0719 04:31:25.749956  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.190660762s)
	W0719 04:31:25.749985  444204 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 04:31:25.750032  444204 retry.go:31] will retry after 134.473484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 04:31:25.884951  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 04:31:26.160613  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.983009465s)
	I0719 04:31:26.160660  444204 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-014077"
	I0719 04:31:26.163073  444204 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 04:31:26.166160  444204 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 04:31:26.199943  444204 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 04:31:26.199970  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:26.211012  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:26.224596  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:26.670730  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:26.685965  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:26.686339  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:27.176659  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:27.189652  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:27.190611  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:27.572297  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:27.671255  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:27.686614  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:27.688627  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:28.173319  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:28.188339  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:28.189992  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:28.628653  444204 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 04:31:28.628790  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:28.673398  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:28.691096  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:28.694550  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:28.695546  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:28.921175  444204 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 04:31:29.000074  444204 addons.go:234] Setting addon gcp-auth=true in "addons-014077"
	I0719 04:31:29.000129  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:29.000572  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:29.004406  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.119389799s)
	I0719 04:31:29.031863  444204 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 04:31:29.031920  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:29.053955  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:29.163868  444204 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 04:31:29.165491  444204 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 04:31:29.167050  444204 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 04:31:29.167073  444204 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 04:31:29.172034  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:29.189118  444204 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 04:31:29.189147  444204 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 04:31:29.192397  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:29.193720  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:29.213516  444204 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 04:31:29.213585  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 04:31:29.234009  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 04:31:29.572496  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:29.673132  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:29.690127  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:29.692343  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:29.966037  444204 addons.go:475] Verifying addon gcp-auth=true in "addons-014077"
	I0719 04:31:29.968164  444204 out.go:177] * Verifying gcp-auth addon...
	I0719 04:31:29.971243  444204 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 04:31:29.981311  444204 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 04:31:29.981378  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:30.172106  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:30.187836  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:30.189147  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:30.475285  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:30.671186  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:30.686234  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:30.686367  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:30.974959  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:31.171067  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:31.186500  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:31.186718  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:31.477477  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:31.572808  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:31.671416  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:31.694835  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:31.696610  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:31.976082  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:32.172030  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:32.188604  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:32.190732  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:32.476049  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:32.671240  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:32.685553  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:32.686601  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:32.975315  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:33.172642  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:33.186557  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:33.187871  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:33.475198  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:33.670383  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:33.685676  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:33.686175  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:33.975413  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:34.072323  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:34.170653  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:34.186158  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:34.186633  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:34.474687  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:34.671148  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:34.685688  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:34.686641  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:34.975210  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:35.171158  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:35.185858  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:35.187051  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:35.475285  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:35.671349  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:35.687987  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:35.689404  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:35.975490  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:36.073345  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:36.171011  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:36.185441  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:36.185973  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:36.475258  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:36.671273  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:36.685685  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:36.686334  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:36.974755  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:37.171058  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:37.185418  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:37.186186  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:37.475601  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:37.671085  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:37.685523  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:37.686759  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:37.974814  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:38.170713  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:38.185459  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:38.187051  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:38.475349  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:38.572402  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:38.671475  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:38.685768  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:38.686712  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:38.975135  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:39.172662  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:39.186096  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:39.186603  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:39.475227  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:39.671732  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:39.686152  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:39.686462  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:39.974833  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:40.171074  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:40.186033  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:40.186701  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:40.474600  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:40.670822  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:40.685402  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:40.687345  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:40.974652  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:41.073102  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:41.170595  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:41.185017  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:41.187038  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:41.475380  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:41.670837  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:41.685769  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:41.686354  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:41.975375  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:42.171463  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:42.186986  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:42.188134  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:42.475060  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:42.670990  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:42.685680  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:42.686307  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:42.974616  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:43.172153  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:43.186216  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:43.187016  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:43.474476  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:43.572086  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:43.671023  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:43.685700  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:43.686363  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:43.975089  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:44.171051  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:44.186392  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:44.186995  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:44.474279  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:44.670452  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:44.685439  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:44.686018  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:44.974997  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:45.171876  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:45.186309  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:45.186482  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:45.475487  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:45.572167  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:45.670931  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:45.685988  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:45.686967  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:45.974511  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:46.170929  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:46.185834  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:46.186260  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:46.475391  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:46.670830  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:46.686061  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:46.686965  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:46.974395  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:47.170898  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:47.185884  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:47.186189  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:47.474373  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:47.575206  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:47.670342  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:47.684897  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:47.685714  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:47.975519  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:48.171084  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:48.185357  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:48.186046  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:48.474252  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:48.669834  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:48.689346  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:48.690682  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:48.974947  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:49.170733  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:49.184815  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:49.185711  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:49.474758  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:49.670219  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:49.687441  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:49.687948  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:49.975523  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:50.072351  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:50.171203  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:50.186321  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:50.186688  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:50.475356  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:50.670747  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:50.685613  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:50.686507  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:50.975744  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:51.170961  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:51.185672  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:51.187085  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:51.474699  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:51.670752  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:51.685216  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:51.686133  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:51.975016  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:52.073140  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:52.170732  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:52.185831  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:52.187321  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:52.474501  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:52.670916  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:52.685761  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:52.687130  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:52.974582  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:53.172596  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:53.185854  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:53.187059  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:53.475999  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:53.671022  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:53.686914  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:53.688722  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:53.975331  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:54.170701  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:54.184626  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:54.186575  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:54.475103  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:54.572223  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:54.670130  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:54.686363  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:54.686664  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:54.976853  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:55.170359  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:55.189113  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:55.195369  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:55.474680  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:55.670604  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:55.686300  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:55.687309  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:55.975174  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:56.170576  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:56.186881  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:56.187176  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:56.474797  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:56.671203  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:56.685006  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:56.685884  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:56.975006  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:57.072382  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:57.170855  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:57.184869  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:57.186026  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:57.475595  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:57.671163  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:57.685766  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:57.685981  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:57.974333  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:58.170328  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:58.187133  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:58.188423  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:58.474517  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:58.670846  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:58.684881  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:58.685617  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:58.975367  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:59.171271  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:59.185725  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:59.187037  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:59.474839  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:59.572272  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:59.670499  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:59.687101  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:59.687564  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:59.975302  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:00.181087  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:00.203533  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:00.209010  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:00.475904  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:00.670465  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:00.685728  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:00.686666  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:00.975381  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:01.170740  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:01.186071  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:01.186360  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:01.474469  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:01.572380  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:32:01.670712  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:01.685351  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:01.686108  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:01.975406  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:02.170653  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:02.185705  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:02.186053  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:02.474679  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:02.670564  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:02.685696  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:02.686589  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:02.975931  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:03.175593  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:03.186830  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:03.186989  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:03.486653  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:03.664331  444204 node_ready.go:49] node "addons-014077" has status "Ready":"True"
	I0719 04:32:03.664353  444204 node_ready.go:38] duration metric: took 40.595537445s for node "addons-014077" to be "Ready" ...
	I0719 04:32:03.664365  444204 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:32:03.700353  444204 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 04:32:03.700380  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:03.705051  444204 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p5jz6" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:03.717069  444204 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 04:32:03.717093  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:03.718338  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:04.007592  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:04.219056  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:04.219742  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:04.220637  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:04.475180  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:04.672108  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:04.685976  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:04.688454  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:04.711572  444204 pod_ready.go:92] pod "coredns-7db6d8ff4d-p5jz6" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.711598  444204 pod_ready.go:81] duration metric: took 1.006515944s for pod "coredns-7db6d8ff4d-p5jz6" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.711646  444204 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.717108  444204 pod_ready.go:92] pod "etcd-addons-014077" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.717185  444204 pod_ready.go:81] duration metric: took 5.522554ms for pod "etcd-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.717206  444204 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.722851  444204 pod_ready.go:92] pod "kube-apiserver-addons-014077" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.722879  444204 pod_ready.go:81] duration metric: took 5.663047ms for pod "kube-apiserver-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.722892  444204 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.728055  444204 pod_ready.go:92] pod "kube-controller-manager-addons-014077" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.728133  444204 pod_ready.go:81] duration metric: took 5.207058ms for pod "kube-controller-manager-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.728155  444204 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqgw8" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.773861  444204 pod_ready.go:92] pod "kube-proxy-hqgw8" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.773886  444204 pod_ready.go:81] duration metric: took 45.722023ms for pod "kube-proxy-hqgw8" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.773898  444204 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.975208  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:05.171958  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:05.175925  444204 pod_ready.go:92] pod "kube-scheduler-addons-014077" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:05.176004  444204 pod_ready.go:81] duration metric: took 402.096478ms for pod "kube-scheduler-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:05.176039  444204 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:05.196083  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:05.201273  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:05.475766  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:05.673231  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:05.688906  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:05.690564  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:05.978331  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:06.172188  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:06.189301  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:06.190633  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:06.474900  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:06.672183  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:06.697273  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:06.698638  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:06.974816  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:07.181393  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:07.196306  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:07.197943  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:07.198687  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:07.477110  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:07.672535  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:07.689409  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:07.695601  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:07.975168  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:08.172417  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:08.193838  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:08.194498  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:08.475786  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:08.673812  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:08.690584  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:08.691525  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:08.976018  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:09.173153  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:09.201948  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:09.203345  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:09.206461  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:09.474982  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:09.689686  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:09.706348  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:09.731056  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:09.975645  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:10.178040  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:10.188768  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:10.190277  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:10.476072  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:10.674059  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:10.688334  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:10.689230  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:10.975586  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:11.171629  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:11.186673  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:11.188677  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:11.493388  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:11.672393  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:11.690332  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:11.697521  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:11.706562  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:11.975453  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:12.172969  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:12.194331  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:12.197964  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:12.475339  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:12.671645  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:12.687356  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:12.687849  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:12.975480  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:13.172324  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:13.187897  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:13.189967  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:13.474968  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:13.671970  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:13.686377  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:13.687296  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:13.975121  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:14.172069  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:14.182836  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:14.186318  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:14.188053  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:14.475632  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:14.673727  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:14.715264  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:14.716131  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:14.975382  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:15.173731  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:15.187276  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:15.189177  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:15.475525  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:15.673009  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:15.686938  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:15.689635  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:15.975627  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:16.171547  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:16.187635  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:16.188064  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:16.475096  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:16.671622  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:16.682052  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:16.686556  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:16.686832  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:16.975271  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:17.172502  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:17.186559  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:17.186635  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:17.475636  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:17.683110  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:17.711713  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:17.717143  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:17.974786  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:18.174725  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:18.193186  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:18.195210  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:18.475225  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:18.680248  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:18.696799  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:18.701609  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:18.702614  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:18.975906  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:19.173461  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:19.186791  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:19.187680  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:19.474497  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:19.676562  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:19.686683  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:19.689770  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:19.975957  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:20.173332  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:20.186767  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:20.187977  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:20.478519  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:20.687199  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:20.691361  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:20.692154  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:20.975949  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:21.172029  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:21.189432  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:21.192966  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:21.193533  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:21.475304  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:21.672201  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:21.686170  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:21.687748  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:21.974902  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:22.172522  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:22.187796  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:22.188401  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:22.475222  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:22.672808  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:22.686156  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:22.687876  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:22.977249  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:23.173806  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:23.200728  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:23.201970  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:23.202710  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:23.475891  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:23.673135  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:23.695801  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:23.697879  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:23.976839  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:24.175299  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:24.190344  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:24.191951  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:24.476591  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:24.673257  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:24.691674  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:24.692210  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:24.984959  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:25.172486  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:25.188584  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:25.189485  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:25.475268  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:25.672867  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:25.695574  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:25.705613  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:25.706702  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:25.975461  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:26.178461  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:26.191494  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:26.191964  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:26.475568  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:26.672178  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:26.691298  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:26.692085  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:26.974489  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:27.173533  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:27.188758  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:27.190633  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:27.483893  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:27.672582  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:27.694627  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:27.696916  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:27.975637  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:28.172532  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:28.187782  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:28.193526  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:28.194401  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:28.474893  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:28.671836  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:28.686228  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:28.687441  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:28.975034  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:29.172379  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:29.187783  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:29.188434  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:29.475992  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:29.674028  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:29.689996  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:29.690923  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:29.974973  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:30.171708  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:30.198904  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:30.199581  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:30.201608  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:30.475336  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:30.678243  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:30.697781  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:30.699109  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:30.976850  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:31.173745  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:31.187309  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:31.188981  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:31.474661  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:31.674330  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:31.693066  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:31.694414  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:31.975025  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:32.172735  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:32.194038  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:32.195310  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:32.474738  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:32.671564  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:32.682539  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:32.687793  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:32.688903  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:32.975148  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:33.171612  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:33.192111  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:33.192653  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:33.475390  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:33.682961  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:33.690560  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:33.692170  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:33.975488  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:34.171965  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:34.186841  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:34.187192  444204 kapi.go:107] duration metric: took 1m8.506188753s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 04:32:34.475344  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:34.672392  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:34.684858  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:34.975454  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:35.172611  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:35.184865  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:35.185351  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:35.480249  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:35.672251  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:35.686128  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:35.974994  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:36.171857  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:36.185825  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:36.477275  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:36.673573  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:36.719326  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:36.974553  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:37.172763  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:37.197380  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:37.209004  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:37.475687  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:37.672696  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:37.686095  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:37.977652  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:38.175554  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:38.186037  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:38.474524  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:38.672146  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:38.685760  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:38.975780  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:39.173165  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:39.186201  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:39.476054  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:39.672378  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:39.690106  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:39.697254  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:39.975369  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:40.172887  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:40.195822  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:40.475124  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:40.675444  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:40.691419  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:40.975667  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:41.171853  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:41.186031  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:41.475509  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:41.671395  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:41.685217  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:41.974919  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:42.173638  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:42.184489  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:42.186502  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:42.475509  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:42.689381  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:42.704127  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:42.975628  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:43.173669  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:43.188607  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:43.475462  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:43.672289  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:43.687116  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:43.975316  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:44.173790  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:44.188860  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:44.190326  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:44.484275  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:44.673023  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:44.689136  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:44.975357  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:45.176867  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:45.187695  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:45.478006  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:45.674022  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:45.690978  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:45.975751  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:46.177066  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:46.194731  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:46.195313  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:46.519724  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:46.677612  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:46.706259  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:46.975807  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:47.184243  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:47.194708  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:47.476416  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:47.674545  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:47.701203  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:47.975051  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:48.181293  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:48.192510  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:48.196018  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:48.476405  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:48.672068  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:48.686716  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:48.976789  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:49.174019  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:49.203663  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:49.475822  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:49.673038  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:49.687935  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:49.976155  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:50.172375  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:50.187524  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:50.476122  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:50.672206  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:50.683106  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:50.686686  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:50.976128  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:51.173957  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:51.197114  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:51.476952  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:51.672324  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:51.686924  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:51.975283  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:52.171574  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:52.185509  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:52.475045  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:52.671496  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:52.685149  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:52.975419  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:53.171610  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:53.183170  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:53.186069  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:53.474652  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:53.681260  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:53.685844  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:53.984580  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:54.172016  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:54.190313  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:54.474660  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:54.671737  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:54.686555  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:54.975586  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:55.172298  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:55.186775  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:55.189082  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:55.475603  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:55.687358  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:55.691876  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:55.975448  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:56.186694  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:56.213831  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:56.476919  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:56.672046  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:56.686532  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:56.982501  444204 kapi.go:107] duration metric: took 1m27.011255242s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 04:32:56.993883  444204 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-014077 cluster.
	I0719 04:32:57.006291  444204 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 04:32:57.018117  444204 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0719 04:32:57.172688  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:57.189510  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:57.675132  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:57.683892  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:57.686923  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:58.181132  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:58.186903  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:58.672020  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:58.702384  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:59.172204  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:59.187303  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:59.671572  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:59.685194  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:00.226954  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:00.258245  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:00.258410  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:00.675705  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:00.688306  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:01.173308  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:01.187508  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:01.672976  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:01.690557  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:02.172439  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:02.200876  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:02.671870  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:02.688347  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:02.689434  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:03.173701  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:03.187122  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:03.672165  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:03.688226  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:04.171545  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:04.185241  444204 kapi.go:107] duration metric: took 1m38.504496682s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0719 04:33:04.674045  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:05.173041  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:05.183095  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:05.673220  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:06.172257  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:06.671659  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:07.174602  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:07.183993  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:07.671416  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:08.171544  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:08.671240  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:09.172390  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:09.673382  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:09.684419  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:10.172594  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:10.671781  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:11.172955  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:11.672600  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:12.190154  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:12.193770  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:12.671545  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:13.172513  444204 kapi.go:107] duration metric: took 1m47.006352326s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 04:33:13.176375  444204 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, storage-provisioner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0719 04:33:13.177945  444204 addons.go:510] duration metric: took 1m53.567017393s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher storage-provisioner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0719 04:33:14.682250  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:17.182498  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:19.681980  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:21.682407  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:24.182980  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:26.682088  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:29.182494  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:31.682658  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:34.182197  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:36.182902  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:38.682277  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:40.684701  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:43.182681  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:45.184012  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:47.682643  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:49.682786  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:52.182420  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:54.182213  444204 pod_ready.go:92] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"True"
	I0719 04:33:54.182243  444204 pod_ready.go:81] duration metric: took 1m49.006167639s for pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace to be "Ready" ...
	I0719 04:33:54.182256  444204 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ms7rm" in "kube-system" namespace to be "Ready" ...
	I0719 04:33:54.188094  444204 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ms7rm" in "kube-system" namespace has status "Ready":"True"
	I0719 04:33:54.188118  444204 pod_ready.go:81] duration metric: took 5.85478ms for pod "nvidia-device-plugin-daemonset-ms7rm" in "kube-system" namespace to be "Ready" ...
	I0719 04:33:54.188139  444204 pod_ready.go:38] duration metric: took 1m50.523723488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:33:54.188155  444204 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:33:54.188660  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 04:33:54.188767  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 04:33:54.248031  444204 cri.go:89] found id: "fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:33:54.248057  444204 cri.go:89] found id: ""
	I0719 04:33:54.248065  444204 logs.go:276] 1 containers: [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a]
	I0719 04:33:54.248120  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.251986  444204 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 04:33:54.252058  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 04:33:54.290232  444204 cri.go:89] found id: "118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:33:54.290256  444204 cri.go:89] found id: ""
	I0719 04:33:54.290264  444204 logs.go:276] 1 containers: [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6]
	I0719 04:33:54.290329  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.294033  444204 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 04:33:54.294112  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 04:33:54.337967  444204 cri.go:89] found id: "47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:33:54.338039  444204 cri.go:89] found id: ""
	I0719 04:33:54.338063  444204 logs.go:276] 1 containers: [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563]
	I0719 04:33:54.338152  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.341838  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 04:33:54.341907  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 04:33:54.384611  444204 cri.go:89] found id: "23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:33:54.384634  444204 cri.go:89] found id: ""
	I0719 04:33:54.384643  444204 logs.go:276] 1 containers: [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1]
	I0719 04:33:54.384732  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.388040  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 04:33:54.388107  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 04:33:54.427893  444204 cri.go:89] found id: "e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:33:54.427916  444204 cri.go:89] found id: ""
	I0719 04:33:54.427924  444204 logs.go:276] 1 containers: [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57]
	I0719 04:33:54.427978  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.431234  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 04:33:54.431301  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 04:33:54.472804  444204 cri.go:89] found id: "6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:33:54.472837  444204 cri.go:89] found id: ""
	I0719 04:33:54.472847  444204 logs.go:276] 1 containers: [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c]
	I0719 04:33:54.472903  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.476173  444204 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 04:33:54.476236  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 04:33:54.511291  444204 cri.go:89] found id: "1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:33:54.511314  444204 cri.go:89] found id: ""
	I0719 04:33:54.511323  444204 logs.go:276] 1 containers: [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5]
	I0719 04:33:54.511376  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.515274  444204 logs.go:123] Gathering logs for describe nodes ...
	I0719 04:33:54.515315  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 04:33:54.685688  444204 logs.go:123] Gathering logs for kube-scheduler [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1] ...
	I0719 04:33:54.685719  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:33:54.732513  444204 logs.go:123] Gathering logs for kube-proxy [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57] ...
	I0719 04:33:54.732545  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:33:54.769780  444204 logs.go:123] Gathering logs for kindnet [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5] ...
	I0719 04:33:54.769810  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:33:54.820578  444204 logs.go:123] Gathering logs for container status ...
	I0719 04:33:54.820617  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 04:33:54.874252  444204 logs.go:123] Gathering logs for kubelet ...
	I0719 04:33:54.874285  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 04:33:54.919475  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:33:54.919725  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:33:54.975312  444204 logs.go:123] Gathering logs for kube-apiserver [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a] ...
	I0719 04:33:54.975349  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:33:55.049477  444204 logs.go:123] Gathering logs for etcd [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6] ...
	I0719 04:33:55.049524  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:33:55.102624  444204 logs.go:123] Gathering logs for coredns [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563] ...
	I0719 04:33:55.102659  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:33:55.153034  444204 logs.go:123] Gathering logs for kube-controller-manager [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c] ...
	I0719 04:33:55.153068  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:33:55.222884  444204 logs.go:123] Gathering logs for CRI-O ...
	I0719 04:33:55.222920  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 04:33:55.323112  444204 logs.go:123] Gathering logs for dmesg ...
	I0719 04:33:55.323146  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 04:33:55.345360  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:33:55.345389  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 04:33:55.345482  444204 out.go:239] X Problems detected in kubelet:
	W0719 04:33:55.345516  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:33:55.345538  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:33:55.345564  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:33:55.345571  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:34:05.346596  444204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:34:05.361344  444204 api_server.go:72] duration metric: took 2m45.75145707s to wait for apiserver process to appear ...
	I0719 04:34:05.361371  444204 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:34:05.361409  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 04:34:05.361469  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 04:34:05.400030  444204 cri.go:89] found id: "fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:34:05.400050  444204 cri.go:89] found id: ""
	I0719 04:34:05.400058  444204 logs.go:276] 1 containers: [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a]
	I0719 04:34:05.400113  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.404052  444204 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 04:34:05.404127  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 04:34:05.443370  444204 cri.go:89] found id: "118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:34:05.443401  444204 cri.go:89] found id: ""
	I0719 04:34:05.443409  444204 logs.go:276] 1 containers: [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6]
	I0719 04:34:05.443469  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.447043  444204 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 04:34:05.447114  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 04:34:05.485326  444204 cri.go:89] found id: "47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:34:05.485347  444204 cri.go:89] found id: ""
	I0719 04:34:05.485354  444204 logs.go:276] 1 containers: [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563]
	I0719 04:34:05.485408  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.488785  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 04:34:05.488855  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 04:34:05.528071  444204 cri.go:89] found id: "23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:34:05.528098  444204 cri.go:89] found id: ""
	I0719 04:34:05.528106  444204 logs.go:276] 1 containers: [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1]
	I0719 04:34:05.528174  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.531868  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 04:34:05.531955  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 04:34:05.573356  444204 cri.go:89] found id: "e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:34:05.573382  444204 cri.go:89] found id: ""
	I0719 04:34:05.573401  444204 logs.go:276] 1 containers: [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57]
	I0719 04:34:05.573456  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.576846  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 04:34:05.576916  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 04:34:05.614907  444204 cri.go:89] found id: "6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:34:05.614931  444204 cri.go:89] found id: ""
	I0719 04:34:05.614939  444204 logs.go:276] 1 containers: [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c]
	I0719 04:34:05.614997  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.618461  444204 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 04:34:05.618535  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 04:34:05.663804  444204 cri.go:89] found id: "1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:34:05.663827  444204 cri.go:89] found id: ""
	I0719 04:34:05.663835  444204 logs.go:276] 1 containers: [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5]
	I0719 04:34:05.663890  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.667396  444204 logs.go:123] Gathering logs for etcd [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6] ...
	I0719 04:34:05.667422  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:34:05.710742  444204 logs.go:123] Gathering logs for coredns [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563] ...
	I0719 04:34:05.710774  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:34:05.758526  444204 logs.go:123] Gathering logs for kube-controller-manager [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c] ...
	I0719 04:34:05.758558  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:34:05.835541  444204 logs.go:123] Gathering logs for kindnet [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5] ...
	I0719 04:34:05.835587  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:34:05.891932  444204 logs.go:123] Gathering logs for container status ...
	I0719 04:34:05.892018  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 04:34:05.961379  444204 logs.go:123] Gathering logs for kubelet ...
	I0719 04:34:05.961410  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 04:34:06.001460  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:34:06.001679  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:34:06.056573  444204 logs.go:123] Gathering logs for dmesg ...
	I0719 04:34:06.056614  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 04:34:06.081171  444204 logs.go:123] Gathering logs for describe nodes ...
	I0719 04:34:06.081252  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 04:34:06.252976  444204 logs.go:123] Gathering logs for kube-apiserver [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a] ...
	I0719 04:34:06.253007  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:34:06.344247  444204 logs.go:123] Gathering logs for kube-scheduler [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1] ...
	I0719 04:34:06.344278  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:34:06.395847  444204 logs.go:123] Gathering logs for kube-proxy [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57] ...
	I0719 04:34:06.395881  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:34:06.441739  444204 logs.go:123] Gathering logs for CRI-O ...
	I0719 04:34:06.441765  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 04:34:06.541913  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:34:06.541948  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 04:34:06.542015  444204 out.go:239] X Problems detected in kubelet:
	W0719 04:34:06.542025  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:34:06.542032  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:34:06.542040  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:34:06.542046  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:34:16.543965  444204 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0719 04:34:16.553400  444204 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0719 04:34:16.555363  444204 api_server.go:141] control plane version: v1.30.3
	I0719 04:34:16.555390  444204 api_server.go:131] duration metric: took 11.194010227s to wait for apiserver health ...
	I0719 04:34:16.555400  444204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:34:16.555422  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 04:34:16.555511  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 04:34:16.596016  444204 cri.go:89] found id: "fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:34:16.596040  444204 cri.go:89] found id: ""
	I0719 04:34:16.596048  444204 logs.go:276] 1 containers: [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a]
	I0719 04:34:16.596116  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.599885  444204 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 04:34:16.599964  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 04:34:16.643995  444204 cri.go:89] found id: "118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:34:16.644016  444204 cri.go:89] found id: ""
	I0719 04:34:16.644024  444204 logs.go:276] 1 containers: [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6]
	I0719 04:34:16.644081  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.647681  444204 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 04:34:16.647754  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 04:34:16.691988  444204 cri.go:89] found id: "47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:34:16.692021  444204 cri.go:89] found id: ""
	I0719 04:34:16.692030  444204 logs.go:276] 1 containers: [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563]
	I0719 04:34:16.692128  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.695927  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 04:34:16.696014  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 04:34:16.738811  444204 cri.go:89] found id: "23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:34:16.738834  444204 cri.go:89] found id: ""
	I0719 04:34:16.738842  444204 logs.go:276] 1 containers: [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1]
	I0719 04:34:16.738896  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.742478  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 04:34:16.742553  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 04:34:16.791542  444204 cri.go:89] found id: "e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:34:16.791566  444204 cri.go:89] found id: ""
	I0719 04:34:16.791574  444204 logs.go:276] 1 containers: [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57]
	I0719 04:34:16.791634  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.795216  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 04:34:16.795325  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 04:34:16.836873  444204 cri.go:89] found id: "6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:34:16.836898  444204 cri.go:89] found id: ""
	I0719 04:34:16.836906  444204 logs.go:276] 1 containers: [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c]
	I0719 04:34:16.836966  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.840535  444204 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 04:34:16.840621  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 04:34:16.879460  444204 cri.go:89] found id: "1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:34:16.879485  444204 cri.go:89] found id: ""
	I0719 04:34:16.879494  444204 logs.go:276] 1 containers: [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5]
	I0719 04:34:16.879566  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.883276  444204 logs.go:123] Gathering logs for kubelet ...
	I0719 04:34:16.883346  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 04:34:16.919898  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:34:16.920118  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:34:16.975358  444204 logs.go:123] Gathering logs for dmesg ...
	I0719 04:34:16.975392  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 04:34:16.999942  444204 logs.go:123] Gathering logs for coredns [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563] ...
	I0719 04:34:16.999971  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:34:17.053700  444204 logs.go:123] Gathering logs for kube-proxy [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57] ...
	I0719 04:34:17.053777  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:34:17.100827  444204 logs.go:123] Gathering logs for kube-controller-manager [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c] ...
	I0719 04:34:17.100855  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:34:17.189836  444204 logs.go:123] Gathering logs for kindnet [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5] ...
	I0719 04:34:17.189875  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:34:17.242023  444204 logs.go:123] Gathering logs for describe nodes ...
	I0719 04:34:17.242058  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 04:34:17.387050  444204 logs.go:123] Gathering logs for kube-apiserver [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a] ...
	I0719 04:34:17.387082  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:34:17.460157  444204 logs.go:123] Gathering logs for etcd [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6] ...
	I0719 04:34:17.460193  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:34:17.504208  444204 logs.go:123] Gathering logs for kube-scheduler [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1] ...
	I0719 04:34:17.504242  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:34:17.550618  444204 logs.go:123] Gathering logs for CRI-O ...
	I0719 04:34:17.550654  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 04:34:17.654806  444204 logs.go:123] Gathering logs for container status ...
	I0719 04:34:17.654847  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 04:34:17.725438  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:34:17.725465  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 04:34:17.725525  444204 out.go:239] X Problems detected in kubelet:
	W0719 04:34:17.725536  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:34:17.725543  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:34:17.725555  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:34:17.725560  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:34:27.739616  444204 system_pods.go:59] 18 kube-system pods found
	I0719 04:34:27.739664  444204 system_pods.go:61] "coredns-7db6d8ff4d-p5jz6" [15f7a070-c1b9-4bdf-b3e6-0828da1ed028] Running
	I0719 04:34:27.739673  444204 system_pods.go:61] "csi-hostpath-attacher-0" [53d3a7de-f014-498f-88ce-c40414300150] Running
	I0719 04:34:27.739678  444204 system_pods.go:61] "csi-hostpath-resizer-0" [d1212fbe-5414-4bdd-a087-175a8236eac6] Running
	I0719 04:34:27.739682  444204 system_pods.go:61] "csi-hostpathplugin-5d84n" [750fc3d6-98b7-4797-a45a-d7849e59a8d8] Running
	I0719 04:34:27.739686  444204 system_pods.go:61] "etcd-addons-014077" [a27153bf-4d84-4f18-8fcd-4a9affe7e283] Running
	I0719 04:34:27.739696  444204 system_pods.go:61] "kindnet-dl4zb" [abdce223-a996-4835-a45e-976b8f051491] Running
	I0719 04:34:27.739717  444204 system_pods.go:61] "kube-apiserver-addons-014077" [0c35929d-2ec9-4ac3-aafe-fc4200ca09ae] Running
	I0719 04:34:27.739726  444204 system_pods.go:61] "kube-controller-manager-addons-014077" [80243f6d-0af4-4084-87fa-39acd70e093c] Running
	I0719 04:34:27.739730  444204 system_pods.go:61] "kube-ingress-dns-minikube" [788540eb-98b8-425c-bd6d-5c74eede8836] Running
	I0719 04:34:27.739734  444204 system_pods.go:61] "kube-proxy-hqgw8" [937c4d03-e6ea-4410-83c1-f3637a52e19d] Running
	I0719 04:34:27.739738  444204 system_pods.go:61] "kube-scheduler-addons-014077" [a60f26d2-5c8f-4b6d-9e80-3435f40ff60c] Running
	I0719 04:34:27.739744  444204 system_pods.go:61] "metrics-server-c59844bb4-6s6pb" [f1e51548-a1be-4356-a620-a46631404c83] Running
	I0719 04:34:27.739748  444204 system_pods.go:61] "nvidia-device-plugin-daemonset-ms7rm" [e10fa14c-5d6e-4792-ba1d-e37851cd7388] Running
	I0719 04:34:27.739755  444204 system_pods.go:61] "registry-656c9c8d9c-99psj" [267507bb-055e-4065-8138-ce3d5f7e0457] Running
	I0719 04:34:27.739759  444204 system_pods.go:61] "registry-proxy-b99sl" [4ee1a72b-b280-4382-82d9-43f79c251273] Running
	I0719 04:34:27.739768  444204 system_pods.go:61] "snapshot-controller-745499f584-5s7lv" [29a87517-28b9-4196-a5db-c8e88ea6fe02] Running
	I0719 04:34:27.739777  444204 system_pods.go:61] "snapshot-controller-745499f584-pfjv7" [2b852f66-d708-44a2-8284-2e07ed87e747] Running
	I0719 04:34:27.739781  444204 system_pods.go:61] "storage-provisioner" [62592988-dc48-43c1-9c20-802f2cb10103] Running
	I0719 04:34:27.739791  444204 system_pods.go:74] duration metric: took 11.184381133s to wait for pod list to return data ...
	I0719 04:34:27.739810  444204 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:34:27.742361  444204 default_sa.go:45] found service account: "default"
	I0719 04:34:27.742387  444204 default_sa.go:55] duration metric: took 2.571092ms for default service account to be created ...
	I0719 04:34:27.742397  444204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:34:27.753545  444204 system_pods.go:86] 18 kube-system pods found
	I0719 04:34:27.753583  444204 system_pods.go:89] "coredns-7db6d8ff4d-p5jz6" [15f7a070-c1b9-4bdf-b3e6-0828da1ed028] Running
	I0719 04:34:27.753592  444204 system_pods.go:89] "csi-hostpath-attacher-0" [53d3a7de-f014-498f-88ce-c40414300150] Running
	I0719 04:34:27.753597  444204 system_pods.go:89] "csi-hostpath-resizer-0" [d1212fbe-5414-4bdd-a087-175a8236eac6] Running
	I0719 04:34:27.753602  444204 system_pods.go:89] "csi-hostpathplugin-5d84n" [750fc3d6-98b7-4797-a45a-d7849e59a8d8] Running
	I0719 04:34:27.753608  444204 system_pods.go:89] "etcd-addons-014077" [a27153bf-4d84-4f18-8fcd-4a9affe7e283] Running
	I0719 04:34:27.753613  444204 system_pods.go:89] "kindnet-dl4zb" [abdce223-a996-4835-a45e-976b8f051491] Running
	I0719 04:34:27.753617  444204 system_pods.go:89] "kube-apiserver-addons-014077" [0c35929d-2ec9-4ac3-aafe-fc4200ca09ae] Running
	I0719 04:34:27.753621  444204 system_pods.go:89] "kube-controller-manager-addons-014077" [80243f6d-0af4-4084-87fa-39acd70e093c] Running
	I0719 04:34:27.753626  444204 system_pods.go:89] "kube-ingress-dns-minikube" [788540eb-98b8-425c-bd6d-5c74eede8836] Running
	I0719 04:34:27.753630  444204 system_pods.go:89] "kube-proxy-hqgw8" [937c4d03-e6ea-4410-83c1-f3637a52e19d] Running
	I0719 04:34:27.753634  444204 system_pods.go:89] "kube-scheduler-addons-014077" [a60f26d2-5c8f-4b6d-9e80-3435f40ff60c] Running
	I0719 04:34:27.753645  444204 system_pods.go:89] "metrics-server-c59844bb4-6s6pb" [f1e51548-a1be-4356-a620-a46631404c83] Running
	I0719 04:34:27.753649  444204 system_pods.go:89] "nvidia-device-plugin-daemonset-ms7rm" [e10fa14c-5d6e-4792-ba1d-e37851cd7388] Running
	I0719 04:34:27.753661  444204 system_pods.go:89] "registry-656c9c8d9c-99psj" [267507bb-055e-4065-8138-ce3d5f7e0457] Running
	I0719 04:34:27.753665  444204 system_pods.go:89] "registry-proxy-b99sl" [4ee1a72b-b280-4382-82d9-43f79c251273] Running
	I0719 04:34:27.753673  444204 system_pods.go:89] "snapshot-controller-745499f584-5s7lv" [29a87517-28b9-4196-a5db-c8e88ea6fe02] Running
	I0719 04:34:27.753677  444204 system_pods.go:89] "snapshot-controller-745499f584-pfjv7" [2b852f66-d708-44a2-8284-2e07ed87e747] Running
	I0719 04:34:27.753681  444204 system_pods.go:89] "storage-provisioner" [62592988-dc48-43c1-9c20-802f2cb10103] Running
	I0719 04:34:27.753689  444204 system_pods.go:126] duration metric: took 11.284998ms to wait for k8s-apps to be running ...
	I0719 04:34:27.753707  444204 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:34:27.753787  444204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:34:27.767156  444204 system_svc.go:56] duration metric: took 13.441102ms WaitForService to wait for kubelet
	I0719 04:34:27.767184  444204 kubeadm.go:582] duration metric: took 3m8.157302198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:34:27.767204  444204 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:34:27.770310  444204 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0719 04:34:27.770341  444204 node_conditions.go:123] node cpu capacity is 2
	I0719 04:34:27.770353  444204 node_conditions.go:105] duration metric: took 3.143918ms to run NodePressure ...
	I0719 04:34:27.770366  444204 start.go:241] waiting for startup goroutines ...
	I0719 04:34:27.770373  444204 start.go:246] waiting for cluster config update ...
	I0719 04:34:27.770390  444204 start.go:255] writing updated cluster config ...
	I0719 04:34:27.770756  444204 ssh_runner.go:195] Run: rm -f paused
	I0719 04:34:28.146154  444204 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 04:34:28.150172  444204 out.go:177] * Done! kubectl is now configured to use "addons-014077" cluster and "default" namespace by default
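Throughout the run above, minikube gathers component logs (`journalctl -u kubelet`, `crictl logs`) and flags "kubelet problem" lines by scanning for known failure signatures, such as the `reflector.go` RBAC `forbidden` errors repeated above. A minimal sketch of such a scan in Python (the pattern below is an assumption for illustration, not minikube's actual matcher in `logs.go`):

```python
import re

# Loosely modeled on the reflector/RBAC failures flagged in the log above
# (an illustrative pattern, not minikube's real implementation).
PROBLEM_RE = re.compile(r'reflector\.go:\d+\].*(failed to list|Failed to watch)')

def find_kubelet_problems(journal_lines):
    """Return journal lines that look like kubelet reflector failures."""
    return [line for line in journal_lines if PROBLEM_RE.search(line)]

sample = [
    'Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175 1519 '
    'reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list '
    '*v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden',
    'Jul 19 04:31:25 addons-014077 kubelet[1519]: I0719 04:31:25.000000 1519 '
    'kubelet.go:100] normal startup message',
]
problems = find_kubelet_problems(sample)
```

Only the first sample line matches, mirroring how the run above reports exactly the two `kube-root-ca.crt` reflector lines as problems while the rest of the journal passes.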
	
	
	==> CRI-O <==
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.377075319Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.404393125Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/65d2dbe4507bc7d117858afcca2e26e7beac1de28fde4df960fba5cc191c5858/merged/etc/passwd: no such file or directory"
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.404439861Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/65d2dbe4507bc7d117858afcca2e26e7beac1de28fde4df960fba5cc191c5858/merged/etc/group: no such file or directory"
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.462387323Z" level=info msg="Created container 5677cf77d98b721652fed3fea815a7f277a714ea4e7f011e52684b2b2d4e5bc3: default/hello-world-app-6778b5fc9f-6lgzw/hello-world-app" id=e0c74ff8-af83-40d0-ae76-212c7543d6b3 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.463647771Z" level=info msg="Starting container: 5677cf77d98b721652fed3fea815a7f277a714ea4e7f011e52684b2b2d4e5bc3" id=a4e968f6-8230-4f11-8b7d-e65a6fcc8b58 name=/runtime.v1.RuntimeService/StartContainer
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.481268177Z" level=info msg="Started container" PID=8270 containerID=5677cf77d98b721652fed3fea815a7f277a714ea4e7f011e52684b2b2d4e5bc3 description=default/hello-world-app-6778b5fc9f-6lgzw/hello-world-app id=a4e968f6-8230-4f11-8b7d-e65a6fcc8b58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=164550c89b55482e34a9d41a999418600c7cdcc82eea9fc2e3fa523119b9fd9e
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.655682120Z" level=info msg="Stopping pod sandbox: 137c3e9183bb1066add0f3247b73974f2173d7996ee01562938f28281aee88c3" id=6f9e9c89-1ce9-4687-ab06-a19c15354cb5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.655725393Z" level=info msg="Stopped pod sandbox (already stopped): 137c3e9183bb1066add0f3247b73974f2173d7996ee01562938f28281aee88c3" id=6f9e9c89-1ce9-4687-ab06-a19c15354cb5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.656198423Z" level=info msg="Removing pod sandbox: 137c3e9183bb1066add0f3247b73974f2173d7996ee01562938f28281aee88c3" id=21a72a53-cc1a-43db-9a05-d0e719339c7e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 19 04:38:09 addons-014077 crio[955]: time="2024-07-19 04:38:09.663507315Z" level=info msg="Removed pod sandbox: 137c3e9183bb1066add0f3247b73974f2173d7996ee01562938f28281aee88c3" id=21a72a53-cc1a-43db-9a05-d0e719339c7e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 19 04:38:10 addons-014077 crio[955]: time="2024-07-19 04:38:10.998132459Z" level=info msg="Stopping container: 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d (timeout: 2s)" id=ff8ff6ea-dc57-4a01-a9b7-0c61679c49cd name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.008328139Z" level=warning msg="Stopping container 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=ff8ff6ea-dc57-4a01-a9b7-0c61679c49cd name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 04:38:13 addons-014077 conmon[4896]: conmon 53bf5ea7cac369f69705 <ninfo>: container 4907 exited with status 137
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.146603186Z" level=info msg="Stopped container 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d: ingress-nginx/ingress-nginx-controller-6d9bd977d4-hhzhm/controller" id=ff8ff6ea-dc57-4a01-a9b7-0c61679c49cd name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.147207339Z" level=info msg="Stopping pod sandbox: a550d154a9854a75bb29715eddf9b89acef18b496340e69416f81e91b8634712" id=92d7ee9d-71e4-48b6-b1a9-60fc8f686da5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.151063658Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-HXFR2ZQ45JNKT5AS - [0:0]\n:KUBE-HP-O57J4KWZ36UJEPLG - [0:0]\n-X KUBE-HP-O57J4KWZ36UJEPLG\n-X KUBE-HP-HXFR2ZQ45JNKT5AS\nCOMMIT\n"
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.158924126Z" level=info msg="Closing host port tcp:80"
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.158979960Z" level=info msg="Closing host port tcp:443"
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.160626490Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.160658014Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.160861881Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6d9bd977d4-hhzhm Namespace:ingress-nginx ID:a550d154a9854a75bb29715eddf9b89acef18b496340e69416f81e91b8634712 UID:01da5791-1cd6-42e8-be85-01f653c78ec1 NetNS:/var/run/netns/f8a34913-6e27-422e-a26d-b9b00ba0469c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.161003367Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6d9bd977d4-hhzhm from CNI network \"kindnet\" (type=ptp)"
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.188292900Z" level=info msg="Stopped pod sandbox: a550d154a9854a75bb29715eddf9b89acef18b496340e69416f81e91b8634712" id=92d7ee9d-71e4-48b6-b1a9-60fc8f686da5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.288215107Z" level=info msg="Removing container: 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d" id=280d5df2-cdf6-422a-a395-ea6a19bf9bcf name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.302635218Z" level=info msg="Removed container 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d: ingress-nginx/ingress-nginx-controller-6d9bd977d4-hhzhm/controller" id=280d5df2-cdf6-422a-a395-ea6a19bf9bcf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5677cf77d98b7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        8 seconds ago       Running             hello-world-app           0                   164550c89b554       hello-world-app-6778b5fc9f-6lgzw
	c73156257c17e       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   94a6bb357355d       nginx
	033f70c67db54       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        3 minutes ago       Running             headlamp                  0                   f2a6bd02665c1       headlamp-7867546754-m7m9q
	c27274b47adda       296b5f799fcd8a39f0e93373bc18787d846c6a2a78a5657b1514831f043c09bf                                                             5 minutes ago       Exited              patch                     3                   854321e73e285       ingress-nginx-admission-patch-zzm6q
	149ec8654a558       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 5 minutes ago       Running             gcp-auth                  0                   c9e1618f979fe       gcp-auth-5db96cd9b4-2vhm2
	6cacd68e62b0f       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        5 minutes ago       Running             metrics-server            0                   87ecfe95f56e7       metrics-server-c59844bb4-6s6pb
	7f5573e7293ce       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              5 minutes ago       Running             yakd                      0                   0ab3cb33d37f2       yakd-dashboard-799879c74f-m6mc8
	c930aaf789c04       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   5 minutes ago       Exited              create                    0                   82f4768f72682       ingress-nginx-admission-create-f4cmw
	47eb3c5df2ae6       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             6 minutes ago       Running             coredns                   0                   c2fc3f29ccbd5       coredns-7db6d8ff4d-p5jz6
	0d31eb5555ec5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago       Running             storage-provisioner       0                   1c8a39fac61db       storage-provisioner
	1507af39dbddd       5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2                                                             6 minutes ago       Running             kindnet-cni               0                   a0007a3207a35       kindnet-dl4zb
	e3ddf4b7cc27b       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                             6 minutes ago       Running             kube-proxy                0                   d7be84e46f2ba       kube-proxy-hqgw8
	fd7560349cf23       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                             7 minutes ago       Running             kube-apiserver            0                   74fde2c5c58b0       kube-apiserver-addons-014077
	118afcbd626f2       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             7 minutes ago       Running             etcd                      0                   a69329f928f84       etcd-addons-014077
	6bf5bc299cd70       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                             7 minutes ago       Running             kube-controller-manager   0                   20d576ec51462       kube-controller-manager-addons-014077
	23e017a8ce2dc       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                             7 minutes ago       Running             kube-scheduler            0                   2af17c336b6a1       kube-scheduler-addons-014077
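The container status table above is the `crictl ps -a`-style output: fixed columns separated by runs of two or more spaces, with single spaces allowed inside a field (e.g. "8 seconds ago"). A small sketch of turning such a table into records, splitting on 2+ spaces (a common heuristic for this output, assumed here rather than taken from any crictl parsing library):

```python
import re

def parse_container_table(text):
    """Parse a crictl-style status table (columns separated by 2+ spaces)
    into a list of dicts keyed by the header row."""
    lines = [l for l in text.splitlines() if l.strip()]
    header = re.split(r'\s{2,}', lines[0].strip())
    rows = []
    for line in lines[1:]:
        fields = re.split(r'\s{2,}', line.strip())
        rows.append(dict(zip(header, fields)))
    return rows

# Abbreviated rows taken from the table above.
table = """CONTAINER           IMAGE               CREATED             STATE               NAME
5677cf77d98b7       docker.io/kicbase/echo-server@sha256:127ac3   8 seconds ago       Running             hello-world-app
47eb3c5df2ae6       2437cf7627177       6 minutes ago       Running             coredns
"""
rows = parse_container_table(table)
```

Note the split on `\s{2,}` is what keeps "8 seconds ago" intact as a single CREATED field; splitting on any whitespace would shear it into three columns.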
	
	
	==> coredns [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563] <==
	[INFO] 10.244.0.8:54880 - 15349 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002200552s
	[INFO] 10.244.0.8:45485 - 13803 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067289s
	[INFO] 10.244.0.8:45485 - 40951 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068068s
	[INFO] 10.244.0.8:36893 - 30763 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100691s
	[INFO] 10.244.0.8:36893 - 40228 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000056392s
	[INFO] 10.244.0.8:38493 - 1373 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047605s
	[INFO] 10.244.0.8:38493 - 47707 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008753s
	[INFO] 10.244.0.8:52826 - 21884 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006102s
	[INFO] 10.244.0.8:52826 - 22654 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102824s
	[INFO] 10.244.0.8:43570 - 8978 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001377713s
	[INFO] 10.244.0.8:43570 - 56351 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001613293s
	[INFO] 10.244.0.8:34405 - 8222 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011157s
	[INFO] 10.244.0.8:34405 - 52253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110364s
	[INFO] 10.244.0.19:52572 - 17396 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001345894s
	[INFO] 10.244.0.19:44051 - 33634 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001684175s
	[INFO] 10.244.0.19:47354 - 23295 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149954s
	[INFO] 10.244.0.19:34007 - 3216 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100576s
	[INFO] 10.244.0.19:53550 - 8188 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000213918s
	[INFO] 10.244.0.19:55000 - 25645 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000240511s
	[INFO] 10.244.0.19:53449 - 4553 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003487052s
	[INFO] 10.244.0.19:37648 - 23208 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002897577s
	[INFO] 10.244.0.19:47195 - 60096 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000732503s
	[INFO] 10.244.0.19:50474 - 14845 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000784071s
	[INFO] 10.244.0.22:40142 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000183601s
	[INFO] 10.244.0.22:42281 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000103907s
	
	
	==> describe nodes <==
	Name:               addons-014077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-014077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=addons-014077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_31_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-014077
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:31:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-014077
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:38:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:36:11 +0000   Fri, 19 Jul 2024 04:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:36:11 +0000   Fri, 19 Jul 2024 04:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:36:11 +0000   Fri, 19 Jul 2024 04:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:36:11 +0000   Fri, 19 Jul 2024 04:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-014077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c3f755ec435453e838245b0ce3ffe74
	  System UUID:                9ee520c4-3132-4006-9e00-175e4d3922ed
	  Boot ID:                    7603d686-a653-4d15-b2a5-a492bcccfba1
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-6lgzw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-5db96cd9b4-2vhm2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  headlamp                    headlamp-7867546754-m7m9q                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 coredns-7db6d8ff4d-p5jz6                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m59s
	  kube-system                 etcd-addons-014077                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m13s
	  kube-system                 kindnet-dl4zb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m59s
	  kube-system                 kube-apiserver-addons-014077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-controller-manager-addons-014077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-proxy-hqgw8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-scheduler-addons-014077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 metrics-server-c59844bb4-6s6pb           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m54s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m54s
	  yakd-dashboard              yakd-dashboard-799879c74f-m6mc8          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     6m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m53s                  kube-proxy       
	  Normal  Starting                 7m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m20s (x8 over 7m20s)  kubelet          Node addons-014077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s (x8 over 7m20s)  kubelet          Node addons-014077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s (x8 over 7m20s)  kubelet          Node addons-014077 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m13s                  kubelet          Node addons-014077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s                  kubelet          Node addons-014077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s                  kubelet          Node addons-014077 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m                     node-controller  Node addons-014077 event: Registered Node addons-014077 in Controller
	  Normal  NodeReady                6m15s                  kubelet          Node addons-014077 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000685] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000913] FS-Cache: N-cookie d=00000000f601f7e6{9p.inode} n=000000007d7c89ef
	[  +0.001018] FS-Cache: N-key=[8] '85cfc90000000000'
	[  +0.002691] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000940] FS-Cache: O-cookie d=00000000f601f7e6{9p.inode} n=00000000bbe59228
	[  +0.001041] FS-Cache: O-key=[8] '85cfc90000000000'
	[  +0.000697] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=00000000f601f7e6{9p.inode} n=00000000d3ec7a89
	[  +0.001028] FS-Cache: N-key=[8] '85cfc90000000000'
	[  +2.435117] FS-Cache: Duplicate cookie detected
	[  +0.000680] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000972] FS-Cache: O-cookie d=00000000f601f7e6{9p.inode} n=00000000e0a278fc
	[  +0.001019] FS-Cache: O-key=[8] '84cfc90000000000'
	[  +0.000743] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001013] FS-Cache: N-cookie d=00000000f601f7e6{9p.inode} n=0000000024481a06
	[  +0.001026] FS-Cache: N-key=[8] '84cfc90000000000'
	[  +0.383436] FS-Cache: Duplicate cookie detected
	[  +0.000706] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=00000000f601f7e6{9p.inode} n=00000000a8bac1f9
	[  +0.001038] FS-Cache: O-key=[8] '90cfc90000000000'
	[  +0.000738] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000917] FS-Cache: N-cookie d=00000000f601f7e6{9p.inode} n=000000007d7c89ef
	[  +0.001022] FS-Cache: N-key=[8] '90cfc90000000000'
	[Jul19 03:57] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6] <==
	{"level":"info","ts":"2024-07-19T04:30:59.532737Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-07-19T04:30:59.554996Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T04:30:59.555534Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T04:30:59.555338Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-19T04:30:59.556614Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-19T04:30:59.556541Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T04:30:59.90248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T04:30:59.902598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T04:30:59.902642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-07-19T04:30:59.902691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T04:30:59.902727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-19T04:30:59.902768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-19T04:30:59.902802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-19T04:30:59.910641Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-014077 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T04:30:59.910745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:30:59.911892Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:30:59.92007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T04:30:59.92245Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:30:59.934503Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:30:59.966886Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:30:59.96696Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:30:59.968479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-19T04:30:59.935281Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T04:30:59.974553Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T04:31:22.571323Z","caller":"traceutil/trace.go:171","msg":"trace[1670416768] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"113.146053ms","start":"2024-07-19T04:31:22.451296Z","end":"2024-07-19T04:31:22.564442Z","steps":["trace[1670416768] 'process raft request'  (duration: 108.768611ms)"],"step_count":1}
	
	
	==> gcp-auth [149ec8654a55866748310882ab7e6d58110ad7a4453225a9ecff226dacbc5eba] <==
	2024/07/19 04:32:56 GCP Auth Webhook started!
	2024/07/19 04:34:29 Ready to marshal response ...
	2024/07/19 04:34:29 Ready to write response ...
	2024/07/19 04:34:29 Ready to marshal response ...
	2024/07/19 04:34:29 Ready to write response ...
	2024/07/19 04:34:29 Ready to marshal response ...
	2024/07/19 04:34:29 Ready to write response ...
	2024/07/19 04:34:39 Ready to marshal response ...
	2024/07/19 04:34:39 Ready to write response ...
	2024/07/19 04:34:45 Ready to marshal response ...
	2024/07/19 04:34:45 Ready to write response ...
	2024/07/19 04:34:45 Ready to marshal response ...
	2024/07/19 04:34:45 Ready to write response ...
	2024/07/19 04:34:53 Ready to marshal response ...
	2024/07/19 04:34:53 Ready to write response ...
	2024/07/19 04:35:04 Ready to marshal response ...
	2024/07/19 04:35:04 Ready to write response ...
	2024/07/19 04:35:25 Ready to marshal response ...
	2024/07/19 04:35:25 Ready to write response ...
	2024/07/19 04:35:47 Ready to marshal response ...
	2024/07/19 04:35:47 Ready to write response ...
	2024/07/19 04:38:07 Ready to marshal response ...
	2024/07/19 04:38:07 Ready to write response ...
	
	
	==> kernel <==
	 04:38:18 up  2:20,  0 users,  load average: 0.17, 1.27, 2.51
	Linux addons-014077 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5] <==
	E0719 04:37:02.548401       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0719 04:37:02.741036       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:37:02.741074       1 main.go:303] handling current node
	I0719 04:37:12.740952       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:37:12.740986       1 main.go:303] handling current node
	I0719 04:37:22.741494       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:37:22.741611       1 main.go:303] handling current node
	I0719 04:37:32.741544       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:37:32.741583       1 main.go:303] handling current node
	W0719 04:37:35.048349       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0719 04:37:35.048385       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0719 04:37:42.740973       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:37:42.741009       1 main.go:303] handling current node
	W0719 04:37:46.750082       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:37:46.750120       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:37:48.289147       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:37:48.289180       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0719 04:37:52.741003       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:37:52.741039       1 main.go:303] handling current node
	I0719 04:38:02.740740       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:38:02.740776       1 main.go:303] handling current node
	I0719 04:38:12.740766       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:38:12.740801       1 main.go:303] handling current node
	W0719 04:38:13.696931       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0719 04:38:13.696966       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kube-apiserver [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a] <==
	E0719 04:33:53.980480       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0719 04:33:53.981595       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.130.223:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.130.223:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.130.223:443: connect: connection refused
	I0719 04:33:54.052371       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 04:34:29.039523       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.21.55"}
	E0719 04:35:09.215912       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0719 04:35:16.767225       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0719 04:35:41.908438       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:41.908618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:41.932400       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:41.933583       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:41.960132       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:41.960184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:41.985519       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:41.985610       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:42.079606       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:42.079861       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:42.222376       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0719 04:35:42.960945       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0719 04:35:43.081290       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0719 04:35:43.155891       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0719 04:35:43.320889       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0719 04:35:47.721508       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0719 04:35:48.019984       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.251.227"}
	I0719 04:38:08.145261       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.155.184"}
	
	
	==> kube-controller-manager [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c] <==
	W0719 04:36:51.733144       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:36:51.733196       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:36:58.396285       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:36:58.396328       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:36:59.115079       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:36:59.115191       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:37:38.807523       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:37:38.807559       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:37:44.147641       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:37:44.147678       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:37:48.833853       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:37:48.833889       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:37:50.032000       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:37:50.032044       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0719 04:38:07.936821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="45.862694ms"
	I0719 04:38:07.950921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="13.067145ms"
	I0719 04:38:07.952006       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="31.384µs"
	I0719 04:38:07.955745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="39.277µs"
	I0719 04:38:09.966529       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0719 04:38:09.973424       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="4.964µs"
	I0719 04:38:09.976116       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0719 04:38:10.296901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="6.894444ms"
	I0719 04:38:10.296964       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="29.045µs"
	W0719 04:38:16.920075       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:38:16.920112       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57] <==
	I0719 04:31:24.851587       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:31:25.063014       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0719 04:31:25.283660       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 04:31:25.283733       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:31:25.289457       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0719 04:31:25.289554       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0719 04:31:25.289600       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:31:25.289823       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:31:25.289872       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:31:25.308360       1 config.go:192] "Starting service config controller"
	I0719 04:31:25.308393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:31:25.308439       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:31:25.308444       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:31:25.310248       1 config.go:319] "Starting node config controller"
	I0719 04:31:25.310264       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:31:25.408531       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:31:25.409459       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:31:25.410977       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1] <==
	W0719 04:31:03.396970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:31:03.397041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 04:31:03.397132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 04:31:03.397171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:03.397254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:31:03.397305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:31:03.397417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:31:03.397430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:31:03.397506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:31:03.397517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:03.397577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 04:31:03.397587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 04:31:04.201787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:31:04.201921       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:31:04.326697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:31:04.326813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:31:04.334483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:31:04.334724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 04:31:04.382256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:31:04.382418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 04:31:04.401563       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 04:31:04.401681       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 04:31:04.435128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:31:04.435170       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0719 04:31:07.085523       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 04:38:07 addons-014077 kubelet[1519]: E0719 04:38:07.933743    1519 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="765fc5f6-2088-4aa2-b3b8-fe74d12b1648" containerName="gadget"
	Jul 19 04:38:07 addons-014077 kubelet[1519]: I0719 04:38:07.933773    1519 memory_manager.go:354] "RemoveStaleState removing state" podUID="765fc5f6-2088-4aa2-b3b8-fe74d12b1648" containerName="gadget"
	Jul 19 04:38:08 addons-014077 kubelet[1519]: I0719 04:38:08.075292    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5n9c\" (UniqueName: \"kubernetes.io/projected/2fd3171a-cad1-41ce-99fb-b609e88b635f-kube-api-access-c5n9c\") pod \"hello-world-app-6778b5fc9f-6lgzw\" (UID: \"2fd3171a-cad1-41ce-99fb-b609e88b635f\") " pod="default/hello-world-app-6778b5fc9f-6lgzw"
	Jul 19 04:38:08 addons-014077 kubelet[1519]: I0719 04:38:08.075354    1519 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2fd3171a-cad1-41ce-99fb-b609e88b635f-gcp-creds\") pod \"hello-world-app-6778b5fc9f-6lgzw\" (UID: \"2fd3171a-cad1-41ce-99fb-b609e88b635f\") " pod="default/hello-world-app-6778b5fc9f-6lgzw"
	Jul 19 04:38:09 addons-014077 kubelet[1519]: I0719 04:38:09.188105    1519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqg8x\" (UniqueName: \"kubernetes.io/projected/788540eb-98b8-425c-bd6d-5c74eede8836-kube-api-access-hqg8x\") pod \"788540eb-98b8-425c-bd6d-5c74eede8836\" (UID: \"788540eb-98b8-425c-bd6d-5c74eede8836\") "
	Jul 19 04:38:09 addons-014077 kubelet[1519]: I0719 04:38:09.191541    1519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788540eb-98b8-425c-bd6d-5c74eede8836-kube-api-access-hqg8x" (OuterVolumeSpecName: "kube-api-access-hqg8x") pod "788540eb-98b8-425c-bd6d-5c74eede8836" (UID: "788540eb-98b8-425c-bd6d-5c74eede8836"). InnerVolumeSpecName "kube-api-access-hqg8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 04:38:09 addons-014077 kubelet[1519]: I0719 04:38:09.266245    1519 scope.go:117] "RemoveContainer" containerID="bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33"
	Jul 19 04:38:09 addons-014077 kubelet[1519]: I0719 04:38:09.290726    1519 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hqg8x\" (UniqueName: \"kubernetes.io/projected/788540eb-98b8-425c-bd6d-5c74eede8836-kube-api-access-hqg8x\") on node \"addons-014077\" DevicePath \"\""
	Jul 19 04:38:09 addons-014077 kubelet[1519]: I0719 04:38:09.305729    1519 scope.go:117] "RemoveContainer" containerID="bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33"
	Jul 19 04:38:09 addons-014077 kubelet[1519]: E0719 04:38:09.307249    1519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33\": container with ID starting with bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33 not found: ID does not exist" containerID="bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33"
	Jul 19 04:38:09 addons-014077 kubelet[1519]: I0719 04:38:09.307449    1519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33"} err="failed to get container status \"bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33\": rpc error: code = NotFound desc = could not find container \"bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33\": container with ID starting with bea9b8cc7e2bbca2008f8042386142056e37cc538143f75dedff57cef7ee2f33 not found: ID does not exist"
	Jul 19 04:38:09 addons-014077 kubelet[1519]: I0719 04:38:09.894425    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788540eb-98b8-425c-bd6d-5c74eede8836" path="/var/lib/kubelet/pods/788540eb-98b8-425c-bd6d-5c74eede8836/volumes"
	Jul 19 04:38:11 addons-014077 kubelet[1519]: I0719 04:38:11.893623    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd7bb0c2-f289-4a56-bfa7-d4ae102a9b8f" path="/var/lib/kubelet/pods/cd7bb0c2-f289-4a56-bfa7-d4ae102a9b8f/volumes"
	Jul 19 04:38:11 addons-014077 kubelet[1519]: I0719 04:38:11.894054    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf2d921-56ca-437e-a60a-57cb41676519" path="/var/lib/kubelet/pods/daf2d921-56ca-437e-a60a-57cb41676519/volumes"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.286694    1519 scope.go:117] "RemoveContainer" containerID="53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.302882    1519 scope.go:117] "RemoveContainer" containerID="53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: E0719 04:38:13.303290    1519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d\": container with ID starting with 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d not found: ID does not exist" containerID="53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.303330    1519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d"} err="failed to get container status \"53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d\": rpc error: code = NotFound desc = could not find container \"53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d\": container with ID starting with 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d not found: ID does not exist"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.313968    1519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wnl5\" (UniqueName: \"kubernetes.io/projected/01da5791-1cd6-42e8-be85-01f653c78ec1-kube-api-access-7wnl5\") pod \"01da5791-1cd6-42e8-be85-01f653c78ec1\" (UID: \"01da5791-1cd6-42e8-be85-01f653c78ec1\") "
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.314029    1519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01da5791-1cd6-42e8-be85-01f653c78ec1-webhook-cert\") pod \"01da5791-1cd6-42e8-be85-01f653c78ec1\" (UID: \"01da5791-1cd6-42e8-be85-01f653c78ec1\") "
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.316517    1519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01da5791-1cd6-42e8-be85-01f653c78ec1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "01da5791-1cd6-42e8-be85-01f653c78ec1" (UID: "01da5791-1cd6-42e8-be85-01f653c78ec1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.319091    1519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01da5791-1cd6-42e8-be85-01f653c78ec1-kube-api-access-7wnl5" (OuterVolumeSpecName: "kube-api-access-7wnl5") pod "01da5791-1cd6-42e8-be85-01f653c78ec1" (UID: "01da5791-1cd6-42e8-be85-01f653c78ec1"). InnerVolumeSpecName "kube-api-access-7wnl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.414647    1519 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7wnl5\" (UniqueName: \"kubernetes.io/projected/01da5791-1cd6-42e8-be85-01f653c78ec1-kube-api-access-7wnl5\") on node \"addons-014077\" DevicePath \"\""
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.414685    1519 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01da5791-1cd6-42e8-be85-01f653c78ec1-webhook-cert\") on node \"addons-014077\" DevicePath \"\""
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.893854    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01da5791-1cd6-42e8-be85-01f653c78ec1" path="/var/lib/kubelet/pods/01da5791-1cd6-42e8-be85-01f653c78ec1/volumes"
	
	
	==> storage-provisioner [0d31eb5555ec5997398ddfc6570a006b5fa2a1f15a5bae69e32f41d81d50c4c5] <==
	I0719 04:32:03.977001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 04:32:04.044050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 04:32:04.044170       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 04:32:04.143880       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 04:32:04.144168       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-014077_d4190dd4-3867-4cea-83e4-0ad52c429814!
	I0719 04:32:04.163730       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fadddc2b-9cda-4345-a2a3-97b816911dce", APIVersion:"v1", ResourceVersion:"942", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-014077_d4190dd4-3867-4cea-83e4-0ad52c429814 became leader
	I0719 04:32:04.244979       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-014077_d4190dd4-3867-4cea-83e4-0ad52c429814!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-014077 -n addons-014077
helpers_test.go:261: (dbg) Run:  kubectl --context addons-014077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.18s)

TestAddons/parallel/MetricsServer (286.14s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.948243ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-6s6pb" [f1e51548-a1be-4356-a620-a46631404c83] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00497663s
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (96.298851ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 4m29.529602412s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (91.449607ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 4m33.537939409s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (87.024672ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 4m38.370462306s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (88.770761ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 4m44.235160747s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (104.827227ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 4m58.337420478s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (102.564257ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 5m16.274976354s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (88.82924ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 5m47.724024054s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (92.73829ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 6m29.483307899s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (112.866615ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 6m59.520370145s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (91.073666ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 8m25.451962392s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-014077 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-014077 top pods -n kube-system: exit status 1 (89.453518ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-p5jz6, age: 9m6.376302478s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-014077
helpers_test.go:235: (dbg) docker inspect addons-014077:

-- stdout --
	[
	    {
	        "Id": "b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498",
	        "Created": "2024-07-19T04:30:45.424624926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 444736,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-19T04:30:45.576248748Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2c91a2178aa1acdb3eade350c62303b0cf135b362b91c6aa21cd060c2dbfcac",
	        "ResolvConfPath": "/var/lib/docker/containers/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498/hostname",
	        "HostsPath": "/var/lib/docker/containers/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498/hosts",
	        "LogPath": "/var/lib/docker/containers/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498/b73d64c37adb33efbd5cf1b6f7334293eab845368faed6405b7f2adb23c67498-json.log",
	        "Name": "/addons-014077",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-014077:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-014077",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/929455911d926a5cf5a855b18d923336d1f8289094a1046fa1839e84eddd6c55-init/diff:/var/lib/docker/overlay2/dcda698d7750c866c9c7e796269374bca18e6015fe6311f8c109dc57f1eac077/diff",
	                "MergedDir": "/var/lib/docker/overlay2/929455911d926a5cf5a855b18d923336d1f8289094a1046fa1839e84eddd6c55/merged",
	                "UpperDir": "/var/lib/docker/overlay2/929455911d926a5cf5a855b18d923336d1f8289094a1046fa1839e84eddd6c55/diff",
	                "WorkDir": "/var/lib/docker/overlay2/929455911d926a5cf5a855b18d923336d1f8289094a1046fa1839e84eddd6c55/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-014077",
	                "Source": "/var/lib/docker/volumes/addons-014077/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-014077",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-014077",
	                "name.minikube.sigs.k8s.io": "addons-014077",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "182e0f89ccf661fa762f8d194caa1bf945b2b44445e06e7e292c6ae4fc1c63fb",
	            "SandboxKey": "/var/run/docker/netns/182e0f89ccf6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-014077": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "dd06f12297e2f764538135a4f71c8ffc11141139f2efc3df4f09c713ecba82d7",
	                    "EndpointID": "669977e504b424d4c980c7c955831ccae23c9fa3a732c705a48fcfaabe9fc350",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-014077",
	                        "b73d64c37adb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-014077 -n addons-014077
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-014077 logs -n 25: (1.580174169s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-596201                                                                     | download-only-596201   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| delete  | -p download-only-248286                                                                     | download-only-248286   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| delete  | -p download-only-521092                                                                     | download-only-521092   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| start   | --download-only -p                                                                          | download-docker-335826 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | download-docker-335826                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-335826                                                                   | download-docker-335826 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-028250   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | binary-mirror-028250                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33677                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-028250                                                                     | binary-mirror-028250   | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| addons  | enable dashboard -p                                                                         | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | addons-014077                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | addons-014077                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-014077 --wait=true                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | -p addons-014077                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-014077 ip                                                                            | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	| addons  | addons-014077 addons disable                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | -p addons-014077                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-014077 ssh cat                                                                       | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | /opt/local-path-provisioner/pvc-b5821a76-1b15-48b8-80bb-7ba2cf9bbdd9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-014077 addons disable                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:34 UTC | 19 Jul 24 04:34 UTC |
	|         | addons-014077                                                                               |                        |         |         |                     |                     |
	| addons  | addons-014077 addons                                                                        | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:35 UTC | 19 Jul 24 04:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-014077 addons                                                                        | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:35 UTC | 19 Jul 24 04:35 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:35 UTC | 19 Jul 24 04:35 UTC |
	|         | addons-014077                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-014077 ssh curl -s                                                                   | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-014077 ip                                                                            | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:38 UTC | 19 Jul 24 04:38 UTC |
	| addons  | addons-014077 addons disable                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:38 UTC | 19 Jul 24 04:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-014077 addons disable                                                                | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:38 UTC | 19 Jul 24 04:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-014077 addons                                                                        | addons-014077          | jenkins | v1.33.1 | 19 Jul 24 04:40 UTC | 19 Jul 24 04:40 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:30:20
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:30:20.802949  444204 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:20.803104  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:20.803113  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:20.803118  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:20.803547  444204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 04:30:20.804016  444204 out.go:298] Setting JSON to false
	I0719 04:30:20.804969  444204 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7966,"bootTime":1721355455,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0719 04:30:20.805042  444204 start.go:139] virtualization:  
	I0719 04:30:20.807518  444204 out.go:177] * [addons-014077] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0719 04:30:20.809708  444204 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:30:20.809772  444204 notify.go:220] Checking for updates...
	I0719 04:30:20.813226  444204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:30:20.814832  444204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:30:20.816486  444204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	I0719 04:30:20.818398  444204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0719 04:30:20.820394  444204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:30:20.822749  444204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:30:20.846594  444204 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 04:30:20.846724  444204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:30:20.909817  444204 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-19 04:30:20.900026691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:30:20.909927  444204 docker.go:307] overlay module found
	I0719 04:30:20.912152  444204 out.go:177] * Using the docker driver based on user configuration
	I0719 04:30:20.914082  444204 start.go:297] selected driver: docker
	I0719 04:30:20.914097  444204 start.go:901] validating driver "docker" against <nil>
	I0719 04:30:20.914110  444204 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:30:20.916008  444204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:30:20.965900  444204 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-19 04:30:20.956338267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:30:20.966081  444204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:30:20.966315  444204 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:30:20.968339  444204 out.go:177] * Using Docker driver with root privileges
	I0719 04:30:20.970373  444204 cni.go:84] Creating CNI manager for ""
	I0719 04:30:20.970406  444204 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:30:20.970417  444204 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:30:20.970668  444204 start.go:340] cluster config:
	{Name:addons-014077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:30:20.973996  444204 out.go:177] * Starting "addons-014077" primary control-plane node in "addons-014077" cluster
	I0719 04:30:20.975824  444204 cache.go:121] Beginning downloading kic base image for docker with crio
	I0719 04:30:20.977567  444204 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0719 04:30:20.979239  444204 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:30:20.979264  444204 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 04:30:20.979289  444204 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0719 04:30:20.979298  444204 cache.go:56] Caching tarball of preloaded images
	I0719 04:30:20.979378  444204 preload.go:172] Found /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0719 04:30:20.979388  444204 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:30:20.979742  444204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/config.json ...
	I0719 04:30:20.979819  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/config.json: {Name:mkee8ab7b1c9c5d1f3baa8814ec326c921c9f362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:20.994400  444204 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 04:30:20.994546  444204 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 04:30:20.994569  444204 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 04:30:20.994574  444204 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 04:30:20.994584  444204 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 04:30:20.994591  444204 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0719 04:30:37.894598  444204 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0719 04:30:37.894638  444204 cache.go:194] Successfully downloaded all kic artifacts
	I0719 04:30:37.894739  444204 start.go:360] acquireMachinesLock for addons-014077: {Name:mk616a464a7e762d13268277321c4ef16174e532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:30:37.894865  444204 start.go:364] duration metric: took 102.439µs to acquireMachinesLock for "addons-014077"
	I0719 04:30:37.894899  444204 start.go:93] Provisioning new machine with config: &{Name:addons-014077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:30:37.894993  444204 start.go:125] createHost starting for "" (driver="docker")
	I0719 04:30:37.897357  444204 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0719 04:30:37.897602  444204 start.go:159] libmachine.API.Create for "addons-014077" (driver="docker")
	I0719 04:30:37.897636  444204 client.go:168] LocalClient.Create starting
	I0719 04:30:37.897749  444204 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem
	I0719 04:30:38.308142  444204 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/cert.pem
	I0719 04:30:38.692879  444204 cli_runner.go:164] Run: docker network inspect addons-014077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0719 04:30:38.707708  444204 cli_runner.go:211] docker network inspect addons-014077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0719 04:30:38.707815  444204 network_create.go:284] running [docker network inspect addons-014077] to gather additional debugging logs...
	I0719 04:30:38.707840  444204 cli_runner.go:164] Run: docker network inspect addons-014077
	W0719 04:30:38.722196  444204 cli_runner.go:211] docker network inspect addons-014077 returned with exit code 1
	I0719 04:30:38.722237  444204 network_create.go:287] error running [docker network inspect addons-014077]: docker network inspect addons-014077: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-014077 not found
	I0719 04:30:38.722251  444204 network_create.go:289] output of [docker network inspect addons-014077]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-014077 not found
	
	** /stderr **
	I0719 04:30:38.722349  444204 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0719 04:30:38.739974  444204 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017badb0}
	I0719 04:30:38.740018  444204 network_create.go:124] attempt to create docker network addons-014077 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0719 04:30:38.740080  444204 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-014077 addons-014077
	I0719 04:30:38.812132  444204 network_create.go:108] docker network addons-014077 192.168.49.0/24 created
	I0719 04:30:38.812179  444204 kic.go:121] calculated static IP "192.168.49.2" for the "addons-014077" container
	I0719 04:30:38.812257  444204 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0719 04:30:38.826967  444204 cli_runner.go:164] Run: docker volume create addons-014077 --label name.minikube.sigs.k8s.io=addons-014077 --label created_by.minikube.sigs.k8s.io=true
	I0719 04:30:38.842896  444204 oci.go:103] Successfully created a docker volume addons-014077
	I0719 04:30:38.843006  444204 cli_runner.go:164] Run: docker run --rm --name addons-014077-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-014077 --entrypoint /usr/bin/test -v addons-014077:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0719 04:30:40.942642  444204 cli_runner.go:217] Completed: docker run --rm --name addons-014077-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-014077 --entrypoint /usr/bin/test -v addons-014077:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib: (2.099583579s)
	I0719 04:30:40.942673  444204 oci.go:107] Successfully prepared a docker volume addons-014077
	I0719 04:30:40.942698  444204 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:30:40.942717  444204 kic.go:194] Starting extracting preloaded images to volume ...
	I0719 04:30:40.942809  444204 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-014077:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0719 04:30:45.354497  444204 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-014077:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir: (4.411645562s)
	I0719 04:30:45.354529  444204 kic.go:203] duration metric: took 4.411809267s to extract preloaded images to volume ...
	W0719 04:30:45.354673  444204 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0719 04:30:45.354788  444204 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0719 04:30:45.408999  444204 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-014077 --name addons-014077 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-014077 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-014077 --network addons-014077 --ip 192.168.49.2 --volume addons-014077:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f
	I0719 04:30:45.731752  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Running}}
	I0719 04:30:45.759468  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:30:45.786413  444204 cli_runner.go:164] Run: docker exec addons-014077 stat /var/lib/dpkg/alternatives/iptables
	I0719 04:30:45.848103  444204 oci.go:144] the created container "addons-014077" has a running status.
	I0719 04:30:45.848133  444204 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa...
	I0719 04:30:46.279285  444204 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0719 04:30:46.305618  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:30:46.330312  444204 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0719 04:30:46.330330  444204 kic_runner.go:114] Args: [docker exec --privileged addons-014077 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0719 04:30:46.395506  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:30:46.423233  444204 machine.go:94] provisionDockerMachine start ...
	I0719 04:30:46.423326  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:46.447546  444204 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:46.447806  444204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0719 04:30:46.447816  444204 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:30:46.614952  444204 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-014077
	
	I0719 04:30:46.615024  444204 ubuntu.go:169] provisioning hostname "addons-014077"
	I0719 04:30:46.615126  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:46.640795  444204 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:46.641188  444204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0719 04:30:46.641204  444204 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-014077 && echo "addons-014077" | sudo tee /etc/hostname
	I0719 04:30:46.780377  444204 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-014077
	
	I0719 04:30:46.780464  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:46.799449  444204 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:46.799689  444204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0719 04:30:46.799706  444204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-014077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-014077/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-014077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:30:46.926490  444204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:30:46.926517  444204 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19302-437615/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-437615/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-437615/.minikube}
	I0719 04:30:46.926547  444204 ubuntu.go:177] setting up certificates
	I0719 04:30:46.926563  444204 provision.go:84] configureAuth start
	I0719 04:30:46.926629  444204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-014077
	I0719 04:30:46.944074  444204 provision.go:143] copyHostCerts
	I0719 04:30:46.944163  444204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-437615/.minikube/ca.pem (1078 bytes)
	I0719 04:30:46.944297  444204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-437615/.minikube/cert.pem (1123 bytes)
	I0719 04:30:46.944354  444204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-437615/.minikube/key.pem (1675 bytes)
	I0719 04:30:46.944409  444204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-437615/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca-key.pem org=jenkins.addons-014077 san=[127.0.0.1 192.168.49.2 addons-014077 localhost minikube]
	I0719 04:30:47.276347  444204 provision.go:177] copyRemoteCerts
	I0719 04:30:47.276418  444204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:30:47.276462  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.294117  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:47.383353  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 04:30:47.408771  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:30:47.432964  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:30:47.457784  444204 provision.go:87] duration metric: took 531.204388ms to configureAuth
	I0719 04:30:47.457814  444204 ubuntu.go:193] setting minikube options for container-runtime
	I0719 04:30:47.458009  444204 config.go:182] Loaded profile config "addons-014077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:30:47.458147  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.474517  444204 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:47.474767  444204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0719 04:30:47.474790  444204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:30:47.701211  444204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:30:47.701242  444204 machine.go:97] duration metric: took 1.277985337s to provisionDockerMachine
	I0719 04:30:47.701254  444204 client.go:171] duration metric: took 9.803607835s to LocalClient.Create
	I0719 04:30:47.701266  444204 start.go:167] duration metric: took 9.80366444s to libmachine.API.Create "addons-014077"
	I0719 04:30:47.701273  444204 start.go:293] postStartSetup for "addons-014077" (driver="docker")
	I0719 04:30:47.701284  444204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:30:47.701362  444204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:30:47.701407  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.718679  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:47.812177  444204 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:30:47.815395  444204 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0719 04:30:47.815480  444204 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0719 04:30:47.815507  444204 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0719 04:30:47.815543  444204 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0719 04:30:47.815574  444204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-437615/.minikube/addons for local assets ...
	I0719 04:30:47.815681  444204 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-437615/.minikube/files for local assets ...
	I0719 04:30:47.815744  444204 start.go:296] duration metric: took 114.46386ms for postStartSetup
	I0719 04:30:47.816136  444204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-014077
	I0719 04:30:47.832623  444204 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/config.json ...
	I0719 04:30:47.832914  444204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:47.832960  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.849429  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:47.935084  444204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0719 04:30:47.939554  444204 start.go:128] duration metric: took 10.044544438s to createHost
	I0719 04:30:47.939580  444204 start.go:83] releasing machines lock for "addons-014077", held for 10.044698897s
	I0719 04:30:47.939654  444204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-014077
	I0719 04:30:47.955255  444204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:30:47.955347  444204 ssh_runner.go:195] Run: cat /version.json
	I0719 04:30:47.955380  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.955518  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:30:47.979824  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:47.983706  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:30:48.190958  444204 ssh_runner.go:195] Run: systemctl --version
	I0719 04:30:48.195452  444204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:30:48.336715  444204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 04:30:48.341590  444204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:30:48.363714  444204 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0719 04:30:48.363790  444204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:30:48.395762  444204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0719 04:30:48.395783  444204 start.go:495] detecting cgroup driver to use...
	I0719 04:30:48.395816  444204 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0719 04:30:48.395867  444204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:30:48.413430  444204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:30:48.425421  444204 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:30:48.425541  444204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:30:48.440755  444204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:30:48.455758  444204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:30:48.546854  444204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:30:48.643462  444204 docker.go:233] disabling docker service ...
	I0719 04:30:48.643539  444204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:30:48.664500  444204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:30:48.676433  444204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:30:48.770823  444204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:30:48.867659  444204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:30:48.879768  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:30:48.896659  444204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:30:48.896729  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.907098  444204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:30:48.907178  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.917411  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.927160  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.938148  444204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:30:48.948796  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.959023  444204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.976415  444204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:30:48.986825  444204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:30:48.995565  444204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:30:49.005142  444204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:30:49.085548  444204 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:30:49.202816  444204 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:30:49.202958  444204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:30:49.206417  444204 start.go:563] Will wait 60s for crictl version
	I0719 04:30:49.206546  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:30:49.209853  444204 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:30:49.253589  444204 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0719 04:30:49.253689  444204 ssh_runner.go:195] Run: crio --version
	I0719 04:30:49.291861  444204 ssh_runner.go:195] Run: crio --version
	I0719 04:30:49.338554  444204 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.24.6 ...
	I0719 04:30:49.340467  444204 cli_runner.go:164] Run: docker network inspect addons-014077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0719 04:30:49.355844  444204 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0719 04:30:49.359485  444204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:30:49.370977  444204 kubeadm.go:883] updating cluster {Name:addons-014077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:30:49.371106  444204 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:30:49.371166  444204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:30:49.447838  444204 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:30:49.447862  444204 crio.go:433] Images already preloaded, skipping extraction
	I0719 04:30:49.447919  444204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:30:49.483791  444204 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:30:49.483814  444204 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:30:49.483822  444204 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 crio true true} ...
	I0719 04:30:49.483931  444204 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-014077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:30:49.484013  444204 ssh_runner.go:195] Run: crio config
	I0719 04:30:49.537701  444204 cni.go:84] Creating CNI manager for ""
	I0719 04:30:49.537726  444204 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:30:49.537736  444204 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:30:49.537759  444204 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-014077 NodeName:addons-014077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:30:49.537907  444204 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-014077"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 04:30:49.537979  444204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:30:49.547309  444204 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:30:49.547438  444204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 04:30:49.556265  444204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0719 04:30:49.574546  444204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:30:49.593388  444204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0719 04:30:49.612012  444204 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0719 04:30:49.615486  444204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:30:49.626545  444204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:30:49.708718  444204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:30:49.722745  444204 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077 for IP: 192.168.49.2
	I0719 04:30:49.722771  444204 certs.go:194] generating shared ca certs ...
	I0719 04:30:49.722787  444204 certs.go:226] acquiring lock for ca certs: {Name:mka5df50fae162dd91003b3c847084951b043e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:49.722920  444204 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-437615/.minikube/ca.key
	I0719 04:30:50.294156  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt ...
	I0719 04:30:50.294192  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt: {Name:mk8ac4967e1da44eed49d1fa6eec2d763c8c81b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.294393  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/ca.key ...
	I0719 04:30:50.294408  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/ca.key: {Name:mk5d1e1346fbfb309ccf6d4beebe0758d3d62000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.294527  444204 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.key
	I0719 04:30:50.539424  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.crt ...
	I0719 04:30:50.539456  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.crt: {Name:mk1acb8d6a21cc45d0e0a6fc2023765575aabb27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.539669  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.key ...
	I0719 04:30:50.539685  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.key: {Name:mka21f1117bec7af2998ac10a45eb4cd14bd52b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.539780  444204 certs.go:256] generating profile certs ...
	I0719 04:30:50.539852  444204 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.key
	I0719 04:30:50.539872  444204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt with IP's: []
	I0719 04:30:50.978372  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt ...
	I0719 04:30:50.978406  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: {Name:mk8cebfc96bc64b731889b761fee19626bad3c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.978663  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.key ...
	I0719 04:30:50.978680  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.key: {Name:mk2b85bdf8ae2d2f7b417bc8f7d652d47cc7966d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:50.978807  444204 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key.38301cbe
	I0719 04:30:50.978831  444204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt.38301cbe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0719 04:30:51.424941  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt.38301cbe ...
	I0719 04:30:51.424972  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt.38301cbe: {Name:mk946ec8e4faa816afdeb1a978f1189b66bbb20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:51.425165  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key.38301cbe ...
	I0719 04:30:51.425181  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key.38301cbe: {Name:mk25c360448449a4cc30b83e4d5ab6a0542b472b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:51.425275  444204 certs.go:381] copying /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt.38301cbe -> /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt
	I0719 04:30:51.425356  444204 certs.go:385] copying /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key.38301cbe -> /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key
	I0719 04:30:51.425410  444204 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.key
	I0719 04:30:51.425430  444204 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.crt with IP's: []
	I0719 04:30:52.030496  444204 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.crt ...
	I0719 04:30:52.030531  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.crt: {Name:mka9b3edf7bea73488c77469cac7fd32772f8a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:52.030719  444204 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.key ...
	I0719 04:30:52.030738  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.key: {Name:mk955c77244fa45e611df07f07790a16c6a3d13a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:30:52.030927  444204 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:30:52.030974  444204 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/ca.pem (1078 bytes)
	I0719 04:30:52.031011  444204 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:30:52.031050  444204 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-437615/.minikube/certs/key.pem (1675 bytes)
	I0719 04:30:52.031702  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:30:52.057889  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 04:30:52.086532  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:30:52.115393  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:30:52.140100  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 04:30:52.165051  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:30:52.189785  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:30:52.213627  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:30:52.239275  444204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:30:52.265398  444204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:30:52.285005  444204 ssh_runner.go:195] Run: openssl version
	I0719 04:30:52.291045  444204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:30:52.301428  444204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:30:52.305383  444204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 04:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:30:52.305453  444204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:30:52.312694  444204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:30:52.322940  444204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:30:52.327194  444204 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:30:52.327282  444204 kubeadm.go:392] StartCluster: {Name:addons-014077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-014077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:30:52.327390  444204 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 04:30:52.327476  444204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 04:30:52.367305  444204 cri.go:89] found id: ""
	I0719 04:30:52.367411  444204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 04:30:52.376760  444204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 04:30:52.385747  444204 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0719 04:30:52.385842  444204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 04:30:52.394746  444204 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 04:30:52.394767  444204 kubeadm.go:157] found existing configuration files:
	
	I0719 04:30:52.394854  444204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 04:30:52.403763  444204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 04:30:52.403856  444204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 04:30:52.412360  444204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 04:30:52.420823  444204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 04:30:52.420913  444204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 04:30:52.429492  444204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 04:30:52.438422  444204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 04:30:52.438594  444204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 04:30:52.447001  444204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 04:30:52.455706  444204 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 04:30:52.455789  444204 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 04:30:52.464232  444204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0719 04:30:52.509359  444204 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 04:30:52.509666  444204 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 04:30:52.555708  444204 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0719 04:30:52.555780  444204 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
	I0719 04:30:52.555821  444204 kubeadm.go:310] OS: Linux
	I0719 04:30:52.555870  444204 kubeadm.go:310] CGROUPS_CPU: enabled
	I0719 04:30:52.555921  444204 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0719 04:30:52.555972  444204 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0719 04:30:52.556022  444204 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0719 04:30:52.556073  444204 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0719 04:30:52.556124  444204 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0719 04:30:52.556178  444204 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0719 04:30:52.556232  444204 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0719 04:30:52.556281  444204 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0719 04:30:52.623047  444204 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 04:30:52.623157  444204 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 04:30:52.623252  444204 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 04:30:52.884007  444204 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 04:30:52.887181  444204 out.go:204]   - Generating certificates and keys ...
	I0719 04:30:52.887360  444204 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 04:30:52.887473  444204 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 04:30:53.125184  444204 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 04:30:53.471778  444204 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 04:30:54.069542  444204 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 04:30:54.374981  444204 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 04:30:54.573548  444204 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 04:30:54.573846  444204 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-014077 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0719 04:30:54.756171  444204 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 04:30:54.756373  444204 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-014077 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0719 04:30:55.470784  444204 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 04:30:55.840287  444204 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 04:30:56.107252  444204 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 04:30:56.107533  444204 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 04:30:56.609068  444204 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 04:30:56.785031  444204 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 04:30:56.982557  444204 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 04:30:57.204466  444204 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 04:30:57.515904  444204 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 04:30:57.517440  444204 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 04:30:57.521524  444204 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 04:30:57.523763  444204 out.go:204]   - Booting up control plane ...
	I0719 04:30:57.523867  444204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 04:30:57.523944  444204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 04:30:57.524839  444204 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 04:30:57.534855  444204 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 04:30:57.535939  444204 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 04:30:57.535989  444204 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 04:30:57.629075  444204 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 04:30:57.629162  444204 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 04:30:58.631770  444204 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002793621s
	I0719 04:30:58.631858  444204 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 04:31:05.136288  444204 kubeadm.go:310] [api-check] The API server is healthy after 6.502078961s
	I0719 04:31:05.157934  444204 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 04:31:05.179119  444204 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 04:31:05.206555  444204 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 04:31:05.206747  444204 kubeadm.go:310] [mark-control-plane] Marking the node addons-014077 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 04:31:05.218310  444204 kubeadm.go:310] [bootstrap-token] Using token: wnhp1p.6qfiwt67coucume8
	I0719 04:31:05.220299  444204 out.go:204]   - Configuring RBAC rules ...
	I0719 04:31:05.220438  444204 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 04:31:05.229076  444204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 04:31:05.239703  444204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 04:31:05.244861  444204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 04:31:05.249663  444204 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 04:31:05.255274  444204 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 04:31:05.544067  444204 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 04:31:05.972060  444204 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 04:31:06.544442  444204 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 04:31:06.545794  444204 kubeadm.go:310] 
	I0719 04:31:06.545874  444204 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 04:31:06.545881  444204 kubeadm.go:310] 
	I0719 04:31:06.545956  444204 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 04:31:06.545967  444204 kubeadm.go:310] 
	I0719 04:31:06.546008  444204 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 04:31:06.546090  444204 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 04:31:06.546145  444204 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 04:31:06.546156  444204 kubeadm.go:310] 
	I0719 04:31:06.546209  444204 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 04:31:06.546218  444204 kubeadm.go:310] 
	I0719 04:31:06.546263  444204 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 04:31:06.546271  444204 kubeadm.go:310] 
	I0719 04:31:06.546321  444204 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 04:31:06.546396  444204 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 04:31:06.546478  444204 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 04:31:06.546488  444204 kubeadm.go:310] 
	I0719 04:31:06.546569  444204 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 04:31:06.546645  444204 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 04:31:06.546654  444204 kubeadm.go:310] 
	I0719 04:31:06.546734  444204 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wnhp1p.6qfiwt67coucume8 \
	I0719 04:31:06.546836  444204 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:adc12064acfd4256056a937f24df92377801baea1f8829f0d6ba89254df1b00b \
	I0719 04:31:06.546860  444204 kubeadm.go:310] 	--control-plane 
	I0719 04:31:06.546868  444204 kubeadm.go:310] 
	I0719 04:31:06.546949  444204 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 04:31:06.546957  444204 kubeadm.go:310] 
	I0719 04:31:06.547036  444204 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wnhp1p.6qfiwt67coucume8 \
	I0719 04:31:06.547136  444204 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:adc12064acfd4256056a937f24df92377801baea1f8829f0d6ba89254df1b00b 
	I0719 04:31:06.550389  444204 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1064-aws\n", err: exit status 1
	I0719 04:31:06.550523  444204 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
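Editor's note on the `--discovery-token-ca-cert-hash` value printed in the kubeadm output above: it is the SHA-256 digest of the cluster CA's public key in DER form, and joining nodes can recompute it from the CA certificate to verify the control plane. A minimal sketch of that recomputation, using a throwaway self-signed certificate generated here purely for illustration (on a real control plane the input would be `/etc/kubernetes/pki/ca.crt`):

```shell
# Generate a throwaway self-signed CA for demonstration only;
# a real cluster would use /etc/kubernetes/pki/ca.crt instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt \
  -subj "/CN=demo-ca" 2>/dev/null

# Extract the public key, convert it to DER, and take its SHA-256
# digest -- the same value kubeadm prints as the discovery hash.
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | awk '{print "sha256:" $NF}'
```

Run against the cluster's real CA certificate, the printed value should match the `sha256:adc12064…` hash embedded in the join command above.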
	I0719 04:31:06.550547  444204 cni.go:84] Creating CNI manager for ""
	I0719 04:31:06.550556  444204 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:31:06.552748  444204 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 04:31:06.554885  444204 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 04:31:06.558858  444204 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 04:31:06.558881  444204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 04:31:06.578965  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 04:31:06.841676  444204 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 04:31:06.841816  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:06.841898  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-014077 minikube.k8s.io/updated_at=2024_07_19T04_31_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=addons-014077 minikube.k8s.io/primary=true
	I0719 04:31:06.983121  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:06.983235  444204 ops.go:34] apiserver oom_adj: -16
	I0719 04:31:07.483652  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:07.983278  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:08.483306  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:08.983220  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:09.483673  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:09.983303  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:10.484100  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:10.983375  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:11.483412  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:11.984135  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:12.483794  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:12.983280  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:13.483871  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:13.983646  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:14.483269  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:14.983252  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:15.483841  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:15.984010  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:16.484237  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:16.983606  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:17.484007  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:17.983257  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:18.483936  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:18.983255  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:19.484016  444204 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:19.608269  444204 kubeadm.go:1113] duration metric: took 12.766502447s to wait for elevateKubeSystemPrivileges
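Editor's note: the long run of identical `kubectl get sa default` invocations above (timestamps ~500ms apart) is minikube polling until the `default` service account exists before it applies the `minikube-rbac` cluster role binding. The pattern is a plain retry-until-success loop with a deadline; `poll_until` and the marker file below are illustrative names, not minikube code:

```shell
# Retry a command every 500ms until it succeeds or $1 seconds elapse.
poll_until() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    # Give up once the deadline has passed.
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.5
  done
}

# Demo predicate: a background job makes the condition true after ~1s,
# standing in for the API server eventually creating the service account.
rm -f /tmp/poll-demo-ready
( sleep 1; touch /tmp/poll-demo-ready ) &
poll_until 10 test -f /tmp/poll-demo-ready && echo "ready"
```

In the log, the condition became true after roughly 25 iterations, hence the ~12.7s `elevateKubeSystemPrivileges` duration metric.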
	I0719 04:31:19.608305  444204 kubeadm.go:394] duration metric: took 27.281027081s to StartCluster
	I0719 04:31:19.608323  444204 settings.go:142] acquiring lock: {Name:mkd73071bbdd6758849d0c7992cd9bb0e7ebcdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:19.608429  444204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:31:19.609288  444204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-437615/kubeconfig: {Name:mk1a12c3f020bf8e8853640f940fd53850952b4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:19.609852  444204 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:31:19.610571  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 04:31:19.610880  444204 config.go:182] Loaded profile config "addons-014077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:19.610925  444204 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 04:31:19.611310  444204 addons.go:69] Setting yakd=true in profile "addons-014077"
	I0719 04:31:19.611343  444204 addons.go:234] Setting addon yakd=true in "addons-014077"
	I0719 04:31:19.611560  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.612262  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.612695  444204 out.go:177] * Verifying Kubernetes components...
	I0719 04:31:19.613407  444204 addons.go:69] Setting metrics-server=true in profile "addons-014077"
	I0719 04:31:19.613461  444204 addons.go:234] Setting addon metrics-server=true in "addons-014077"
	I0719 04:31:19.613496  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.613507  444204 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-014077"
	I0719 04:31:19.613534  444204 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-014077"
	I0719 04:31:19.613568  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.614047  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.614143  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.615321  444204 addons.go:69] Setting registry=true in profile "addons-014077"
	I0719 04:31:19.615365  444204 addons.go:234] Setting addon registry=true in "addons-014077"
	I0719 04:31:19.615393  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.615849  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.634539  444204 addons.go:69] Setting cloud-spanner=true in profile "addons-014077"
	I0719 04:31:19.638116  444204 addons.go:234] Setting addon cloud-spanner=true in "addons-014077"
	I0719 04:31:19.638189  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.638729  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.636639  444204 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-014077"
	I0719 04:31:19.643098  444204 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-014077"
	I0719 04:31:19.643264  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.636652  444204 addons.go:69] Setting default-storageclass=true in profile "addons-014077"
	I0719 04:31:19.636657  444204 addons.go:69] Setting gcp-auth=true in profile "addons-014077"
	I0719 04:31:19.636660  444204 addons.go:69] Setting ingress=true in profile "addons-014077"
	I0719 04:31:19.636664  444204 addons.go:69] Setting ingress-dns=true in profile "addons-014077"
	I0719 04:31:19.636677  444204 addons.go:69] Setting inspektor-gadget=true in profile "addons-014077"
	I0719 04:31:19.636748  444204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:19.636944  444204 addons.go:69] Setting storage-provisioner=true in profile "addons-014077"
	I0719 04:31:19.636953  444204 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-014077"
	I0719 04:31:19.636961  444204 addons.go:69] Setting volcano=true in profile "addons-014077"
	I0719 04:31:19.636969  444204 addons.go:69] Setting volumesnapshots=true in profile "addons-014077"
	I0719 04:31:19.659570  444204 addons.go:234] Setting addon volumesnapshots=true in "addons-014077"
	I0719 04:31:19.659633  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.660114  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.660734  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.680819  444204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-014077"
	I0719 04:31:19.681173  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.688529  444204 addons.go:234] Setting addon storage-provisioner=true in "addons-014077"
	I0719 04:31:19.688628  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.689192  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.707056  444204 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-014077"
	I0719 04:31:19.708213  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.717000  444204 mustload.go:65] Loading cluster: addons-014077
	I0719 04:31:19.717260  444204 config.go:182] Loaded profile config "addons-014077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:19.717551  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.739026  444204 addons.go:234] Setting addon volcano=true in "addons-014077"
	I0719 04:31:19.739156  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.739617  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.739949  444204 addons.go:234] Setting addon ingress=true in "addons-014077"
	I0719 04:31:19.740034  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.740466  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.771390  444204 addons.go:234] Setting addon ingress-dns=true in "addons-014077"
	I0719 04:31:19.771503  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.771979  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.802393  444204 addons.go:234] Setting addon inspektor-gadget=true in "addons-014077"
	I0719 04:31:19.802517  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.802998  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.842096  444204 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 04:31:19.850949  444204 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 04:31:19.851017  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 04:31:19.851119  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.858505  444204 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 04:31:19.860552  444204 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 04:31:19.861203  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 04:31:19.861241  444204 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 04:31:19.861425  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.862817  444204 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 04:31:19.863068  444204 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 04:31:19.863082  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 04:31:19.863144  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.865914  444204 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 04:31:19.869389  444204 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-014077"
	I0719 04:31:19.869441  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.869853  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.971092  444204 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 04:31:19.971133  444204 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 04:31:19.971216  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.971452  444204 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 04:31:19.973473  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 04:31:19.975499  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 04:31:19.976800  444204 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 04:31:19.976829  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 04:31:19.976900  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	W0719 04:31:19.986792  444204 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0719 04:31:19.987066  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:19.988282  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 04:31:19.989514  444204 addons.go:234] Setting addon default-storageclass=true in "addons-014077"
	I0719 04:31:19.989547  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.989959  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:19.990142  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:19.991041  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:19.993262  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 04:31:19.995896  444204 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 04:31:19.996035  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:19.995833  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 04:31:20.022126  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 04:31:20.026580  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 04:31:19.995844  444204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 04:31:20.030376  444204 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:31:20.030400  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 04:31:20.034636  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.034844  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 04:31:20.036961  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 04:31:20.042596  444204 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 04:31:20.046562  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 04:31:20.046602  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 04:31:20.046691  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.062518  444204 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 04:31:20.064855  444204 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 04:31:20.066883  444204 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 04:31:20.071027  444204 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 04:31:20.071053  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 04:31:20.071211  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.088740  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.090936  444204 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 04:31:20.091068  444204 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 04:31:20.093974  444204 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 04:31:20.094000  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 04:31:20.094079  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.094310  444204 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 04:31:20.094322  444204 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 04:31:20.094379  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.123060  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.139082  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.185413  444204 out.go:177]   - Using image docker.io/busybox:stable
	I0719 04:31:20.190916  444204 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 04:31:20.192772  444204 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 04:31:20.192797  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 04:31:20.192868  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.246206  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
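Editor's note: the sed pipeline above patches the CoreDNS Corefile stored in the `coredns` ConfigMap. It inserts a `hosts` block that resolves `host.minikube.internal` to the Docker host gateway (192.168.49.1) ahead of the upstream `forward` directive, and adds `log` before `errors` to enable query logging. After the `kubectl replace`, the affected part of the Corefile looks roughly like this (the other standard plugins such as `health`, `kubernetes`, and `cache` are elided here for brevity):

```
.:53 {
    log
    errors
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
}
```

The `fallthrough` line matters: queries for names other than `host.minikube.internal` fall through the `hosts` plugin to the remaining plugins, so normal upstream resolution still works.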
	I0719 04:31:20.274968  444204 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 04:31:20.274988  444204 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 04:31:20.275052  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:20.283424  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.289876  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.303429  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.303811  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.304807  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.331034  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.331040  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.368107  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:20.482718  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 04:31:20.518035  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 04:31:20.518061  444204 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 04:31:20.568342  444204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:31:20.573901  444204 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 04:31:20.573932  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 04:31:20.632722  444204 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 04:31:20.632748  444204 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 04:31:20.666288  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 04:31:20.706807  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 04:31:20.706853  444204 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 04:31:20.766299  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 04:31:20.766327  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 04:31:20.769351  444204 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 04:31:20.769375  444204 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 04:31:20.772615  444204 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 04:31:20.772638  444204 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 04:31:20.778162  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 04:31:20.832130  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 04:31:20.834712  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 04:31:20.841755  444204 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 04:31:20.841783  444204 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 04:31:20.846053  444204 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 04:31:20.846081  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 04:31:20.849270  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 04:31:20.874411  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:31:20.880707  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 04:31:20.880747  444204 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 04:31:20.929227  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 04:31:20.929253  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 04:31:20.959969  444204 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 04:31:20.959996  444204 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 04:31:20.962023  444204 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 04:31:20.962048  444204 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 04:31:20.991160  444204 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 04:31:20.991201  444204 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 04:31:21.031050  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 04:31:21.074125  444204 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 04:31:21.074157  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 04:31:21.137566  444204 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 04:31:21.137610  444204 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 04:31:21.147136  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 04:31:21.155915  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 04:31:21.155959  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 04:31:21.186084  444204 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 04:31:21.186123  444204 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 04:31:21.260889  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 04:31:21.310179  444204 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 04:31:21.310205  444204 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 04:31:21.352552  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 04:31:21.352592  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 04:31:21.358560  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 04:31:21.358586  444204 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 04:31:21.465121  444204 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 04:31:21.465161  444204 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 04:31:21.470924  444204 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 04:31:21.470950  444204 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 04:31:21.495128  444204 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 04:31:21.495160  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 04:31:21.559260  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 04:31:21.580539  444204 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 04:31:21.580566  444204 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 04:31:21.606821  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 04:31:21.606852  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 04:31:21.661159  444204 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 04:31:21.661184  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 04:31:21.686134  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 04:31:21.686177  444204 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 04:31:21.696234  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 04:31:21.733506  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 04:31:21.733532  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 04:31:21.847299  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 04:31:21.847333  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 04:31:22.013287  444204 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 04:31:22.013319  444204 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 04:31:22.177555  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 04:31:23.066419  444204 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.820173963s)
	I0719 04:31:23.066464  444204 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
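The two lines above show minikube rewriting the CoreDNS ConfigMap so that `host.minikube.internal` resolves to the host gateway IP (192.168.49.1): the sed pipeline inserts a `hosts` block immediately before the `forward . /etc/resolv.conf` directive. A minimal sketch of that text transformation, in Python rather than minikube's actual Go/sed implementation, and using a simplified Corefile as an assumed input:

```python
# Sketch (assumption): replicate the sed edit from the log, which inserts a
# CoreDNS `hosts` block just before the `forward . /etc/resolv.conf` line.
def inject_host_record(corefile: str, ip: str, hostname: str) -> str:
    out = []
    for line in corefile.splitlines():
        if line.lstrip().startswith("forward . /etc/resolv.conf"):
            indent = line[: len(line) - len(line.lstrip())]
            # Map the host gateway IP to host.minikube.internal, then fall
            # through to the remaining plugins for all other names.
            out.append(f"{indent}hosts {{")
            out.append(f"{indent}   {ip} {hostname}")
            out.append(f"{indent}   fallthrough")
            out.append(f"{indent}}}")
        out.append(line)
    return "\n".join(out)

corefile = """.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
}"""
patched = inject_host_record(corefile, "192.168.49.1", "host.minikube.internal")
print(patched)
```

The `kubectl replace -f -` at the end of the logged pipeline then pushes the patched Corefile back into the `coredns` ConfigMap.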
	I0719 04:31:23.067034  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.584286042s)
	I0719 04:31:23.067183  444204 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.498817727s)
	I0719 04:31:23.068782  444204 node_ready.go:35] waiting up to 6m0s for node "addons-014077" to be "Ready" ...
	I0719 04:31:23.738094  444204 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-014077" context rescaled to 1 replicas
	I0719 04:31:24.124244  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.45791764s)
	I0719 04:31:24.193565  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.415366394s)
	I0719 04:31:24.678470  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.846302782s)
	I0719 04:31:25.080363  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:25.673659  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.838900366s)
	I0719 04:31:25.673702  444204 addons.go:475] Verifying addon ingress=true in "addons-014077"
	I0719 04:31:25.674074  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.642991932s)
	I0719 04:31:25.674115  444204 addons.go:475] Verifying addon registry=true in "addons-014077"
	I0719 04:31:25.673858  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.824466928s)
	I0719 04:31:25.673899  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.79946494s)
	I0719 04:31:25.674480  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.52726484s)
	I0719 04:31:25.674498  444204 addons.go:475] Verifying addon metrics-server=true in "addons-014077"
	I0719 04:31:25.674542  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.413620715s)
	I0719 04:31:25.676987  444204 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-014077 service yakd-dashboard -n yakd-dashboard
	
	I0719 04:31:25.677072  444204 out.go:177] * Verifying ingress addon...
	I0719 04:31:25.677173  444204 out.go:177] * Verifying registry addon...
	I0719 04:31:25.680746  444204 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 04:31:25.681002  444204 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 04:31:25.701097  444204 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 04:31:25.701125  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:25.702111  444204 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 04:31:25.702131  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
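The long runs of `kapi.go:96` lines that follow come from a poll loop: minikube re-reads the pod's phase on an interval and logs the intermediate state until the pod leaves Pending or a timeout expires. A hypothetical Python sketch of that wait loop (not minikube's actual kapi package; names and the fake phase source are illustrative):

```python
import time
from typing import Callable

def wait_for_pod(get_phase: Callable[[], str], timeout: float = 5.0,
                 interval: float = 0.01) -> str:
    """Poll get_phase until the pod reports Running or the timeout expires,
    logging each intermediate state like the kapi.go:96 lines in this report."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase == "Running":
            return phase
        print(f"waiting for pod, current state: {phase}")
        time.sleep(interval)
    raise TimeoutError("pod never became Running")

# Simulate a pod that stays Pending for two polls, then starts Running.
phases = iter(["Pending", "Pending", "Running"])
print(wait_for_pod(lambda: next(phases)))
```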
	I0719 04:31:25.749746  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.053464222s)
	I0719 04:31:25.749956  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.190660762s)
	W0719 04:31:25.749985  444204 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 04:31:25.750032  444204 retry.go:31] will retry after 134.473484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
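The failure above is the classic CRD-establishment race: the `VolumeSnapshotClass` object in `csi-hostpath-snapshotclass.yaml` is applied in the same batch as the `volumesnapshotclasses.snapshot.storage.k8s.io` CRD that defines it, and the API server has not registered the new kind yet, hence "resource mapping not found ... ensure CRDs are installed first". minikube handles this by retrying the apply after a short delay ("will retry after 134.473484ms"). A minimal sketch of that retry-with-backoff pattern, as a hypothetical Python helper rather than minikube's actual retry.go code:

```python
import time

def retry(fn, attempts: int = 5, initial_delay: float = 0.134):
    """Retry fn with roughly exponential backoff, mirroring the
    'will retry after 134.473484ms' behaviour logged above."""
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            time.sleep(delay)
            delay *= 2  # back off before the next apply

calls = {"n": 0}

def flaky_apply():
    # Simulate `kubectl apply` failing until the CRDs are established.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("no matches for kind VolumeSnapshotClass")
    return "applied"

print(retry(flaky_apply, initial_delay=0.01))  # -> applied
```

By the time of the retried apply at 04:31:25.884951 (note the `--force` flag), the CRDs have been established and the batch succeeds, as the completion at 04:31:29.004406 confirms.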
	I0719 04:31:25.884951  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 04:31:26.160613  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.983009465s)
	I0719 04:31:26.160660  444204 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-014077"
	I0719 04:31:26.163073  444204 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 04:31:26.166160  444204 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 04:31:26.199943  444204 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 04:31:26.199970  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:26.211012  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:26.224596  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:26.670730  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:26.685965  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:26.686339  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:27.176659  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:27.189652  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:27.190611  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:27.572297  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:27.671255  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:27.686614  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:27.688627  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:28.173319  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:28.188339  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:28.189992  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:28.628653  444204 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 04:31:28.628790  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:28.673398  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:28.691096  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:28.694550  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:28.695546  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:28.921175  444204 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 04:31:29.000074  444204 addons.go:234] Setting addon gcp-auth=true in "addons-014077"
	I0719 04:31:29.000129  444204 host.go:66] Checking if "addons-014077" exists ...
	I0719 04:31:29.000572  444204 cli_runner.go:164] Run: docker container inspect addons-014077 --format={{.State.Status}}
	I0719 04:31:29.004406  444204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.119389799s)
	I0719 04:31:29.031863  444204 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 04:31:29.031920  444204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-014077
	I0719 04:31:29.053955  444204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/addons-014077/id_rsa Username:docker}
	I0719 04:31:29.163868  444204 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 04:31:29.165491  444204 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 04:31:29.167050  444204 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 04:31:29.167073  444204 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 04:31:29.172034  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:29.189118  444204 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 04:31:29.189147  444204 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 04:31:29.192397  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:29.193720  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:29.213516  444204 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 04:31:29.213585  444204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 04:31:29.234009  444204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 04:31:29.572496  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:29.673132  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:29.690127  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:29.692343  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:29.966037  444204 addons.go:475] Verifying addon gcp-auth=true in "addons-014077"
	I0719 04:31:29.968164  444204 out.go:177] * Verifying gcp-auth addon...
	I0719 04:31:29.971243  444204 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 04:31:29.981311  444204 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 04:31:29.981378  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:30.172106  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:30.187836  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:30.189147  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:30.475285  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:30.671186  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:30.686234  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:30.686367  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:30.974959  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:31.171067  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:31.186500  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:31.186718  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:31.477477  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:31.572808  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:31.671416  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:31.694835  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:31.696610  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:31.976082  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:32.172030  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:32.188604  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:32.190732  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:32.476049  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:32.671240  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:32.685553  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:32.686601  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:32.975315  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:33.172642  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:33.186557  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:33.187871  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:33.475198  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:33.670383  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:33.685676  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:33.686175  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:33.975413  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:34.072323  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:34.170653  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:34.186158  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:34.186633  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:34.474687  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:34.671148  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:34.685688  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:34.686641  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:34.975210  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:35.171158  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:35.185858  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:35.187051  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:35.475285  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:35.671349  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:35.687987  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:35.689404  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:35.975490  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:36.073345  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:36.171011  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:36.185441  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:36.185973  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:36.475258  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:36.671273  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:36.685685  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:36.686334  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:36.974755  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:37.171058  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:37.185418  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:37.186186  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:37.475601  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:37.671085  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:37.685523  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:37.686759  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:37.974814  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:38.170713  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:38.185459  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:38.187051  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:38.475349  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:38.572402  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:38.671475  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:38.685768  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:38.686712  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:38.975135  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:39.172662  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:39.186096  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:39.186603  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:39.475227  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:39.671732  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:39.686152  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:39.686462  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:39.974833  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:40.171074  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:40.186033  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:40.186701  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:40.474600  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:40.670822  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:40.685402  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:40.687345  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:40.974652  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:41.073102  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:41.170595  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:41.185017  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:41.187038  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:41.475380  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:41.670837  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:41.685769  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:41.686354  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:41.975375  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:42.171463  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:42.186986  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:42.188134  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:42.475060  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:42.670990  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:42.685680  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:42.686307  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:42.974616  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:43.172153  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:43.186216  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:43.187016  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:43.474476  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:43.572086  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:43.671023  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:43.685700  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:43.686363  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:43.975089  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:44.171051  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:44.186392  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:44.186995  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:44.474279  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:44.670452  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:44.685439  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:44.686018  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:44.974997  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:45.171876  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:45.186309  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:45.186482  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:45.475487  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:45.572167  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:45.670931  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:45.685988  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:45.686967  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:45.974511  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:46.170929  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:46.185834  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:46.186260  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:46.475391  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:46.670830  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:46.686061  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:46.686965  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:46.974395  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:47.170898  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:47.185884  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:47.186189  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:47.474373  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:47.575206  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:47.670342  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:47.684897  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:47.685714  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:47.975519  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:48.171084  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:48.185357  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:48.186046  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:48.474252  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:48.669834  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:48.689346  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:48.690682  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:48.974947  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:49.170733  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:49.184815  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:49.185711  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:49.474758  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:49.670219  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:49.687441  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:49.687948  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:49.975523  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:50.072351  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:50.171203  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:50.186321  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:50.186688  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:50.475356  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:50.670747  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:50.685613  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:50.686507  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:50.975744  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:51.170961  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:51.185672  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:51.187085  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:51.474699  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:51.670752  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:51.685216  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:51.686133  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:51.975016  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:52.073140  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:52.170732  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:52.185831  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:52.187321  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:52.474501  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:52.670916  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:52.685761  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:52.687130  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:52.974582  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:53.172596  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:53.185854  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:53.187059  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:53.475999  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:53.671022  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:53.686914  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:53.688722  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:53.975331  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:54.170701  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:54.184626  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:54.186575  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:54.475103  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:54.572223  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:54.670130  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:54.686363  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:54.686664  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:54.976853  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:55.170359  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:55.189113  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:55.195369  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:55.474680  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:55.670604  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:55.686300  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:55.687309  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:55.975174  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:56.170576  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:56.186881  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:56.187176  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:56.474797  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:56.671203  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:56.685006  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:56.685884  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:56.975006  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:57.072382  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:57.170855  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:57.184869  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:57.186026  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:57.475595  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:57.671163  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:57.685766  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:57.685981  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:57.974333  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:58.170328  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:58.187133  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:58.188423  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:58.474517  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:58.670846  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:58.684881  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:58.685617  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:58.975367  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:59.171271  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:59.185725  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:59.187037  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:59.474839  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:31:59.572272  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:31:59.670499  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:31:59.687101  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:31:59.687564  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:31:59.975302  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:00.181087  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:00.203533  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:00.209010  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:00.475904  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:00.670465  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:00.685728  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:00.686666  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:00.975381  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:01.170740  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:01.186071  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:01.186360  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:01.474469  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:01.572380  444204 node_ready.go:53] node "addons-014077" has status "Ready":"False"
	I0719 04:32:01.670712  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:01.685351  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:01.686108  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:01.975406  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:02.170653  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:02.185705  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:02.186053  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:02.474679  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:02.670564  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:02.685696  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:02.686589  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:02.975931  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:03.175593  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:03.186830  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:03.186989  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:03.486653  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:03.664331  444204 node_ready.go:49] node "addons-014077" has status "Ready":"True"
	I0719 04:32:03.664353  444204 node_ready.go:38] duration metric: took 40.595537445s for node "addons-014077" to be "Ready" ...
	I0719 04:32:03.664365  444204 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:32:03.700353  444204 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 04:32:03.700380  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:03.705051  444204 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p5jz6" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:03.717069  444204 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 04:32:03.717093  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:03.718338  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:04.007592  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:04.219056  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:04.219742  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:04.220637  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:04.475180  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:04.672108  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:04.685976  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:04.688454  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:04.711572  444204 pod_ready.go:92] pod "coredns-7db6d8ff4d-p5jz6" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.711598  444204 pod_ready.go:81] duration metric: took 1.006515944s for pod "coredns-7db6d8ff4d-p5jz6" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.711646  444204 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.717108  444204 pod_ready.go:92] pod "etcd-addons-014077" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.717185  444204 pod_ready.go:81] duration metric: took 5.522554ms for pod "etcd-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.717206  444204 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.722851  444204 pod_ready.go:92] pod "kube-apiserver-addons-014077" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.722879  444204 pod_ready.go:81] duration metric: took 5.663047ms for pod "kube-apiserver-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.722892  444204 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.728055  444204 pod_ready.go:92] pod "kube-controller-manager-addons-014077" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.728133  444204 pod_ready.go:81] duration metric: took 5.207058ms for pod "kube-controller-manager-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.728155  444204 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hqgw8" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.773861  444204 pod_ready.go:92] pod "kube-proxy-hqgw8" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:04.773886  444204 pod_ready.go:81] duration metric: took 45.722023ms for pod "kube-proxy-hqgw8" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.773898  444204 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:04.975208  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:05.171958  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:05.175925  444204 pod_ready.go:92] pod "kube-scheduler-addons-014077" in "kube-system" namespace has status "Ready":"True"
	I0719 04:32:05.176004  444204 pod_ready.go:81] duration metric: took 402.096478ms for pod "kube-scheduler-addons-014077" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:05.176039  444204 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace to be "Ready" ...
	I0719 04:32:05.196083  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:05.201273  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:05.475766  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:05.673231  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:05.688906  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:05.690564  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:05.978331  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:06.172188  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:06.189301  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:06.190633  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:06.474900  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:06.672183  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:06.697273  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:06.698638  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:06.974816  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:07.181393  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:07.196306  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:07.197943  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:07.198687  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:07.477110  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:07.672535  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:07.689409  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:07.695601  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:07.975168  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:08.172417  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:08.193838  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:08.194498  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:08.475786  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:08.673812  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:08.690584  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:08.691525  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:08.976018  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:09.173153  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:09.201948  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:09.203345  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:09.206461  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:09.474982  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:09.689686  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:09.706348  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:09.731056  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:09.975645  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:10.178040  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:10.188768  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:10.190277  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:10.476072  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:10.674059  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:10.688334  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:10.689230  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:10.975586  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:11.171629  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:11.186673  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:11.188677  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:11.493388  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:11.672393  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:11.690332  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:11.697521  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:11.706562  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:11.975453  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:12.172969  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:12.194331  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:12.197964  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:12.475339  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:12.671645  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:12.687356  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:12.687849  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:12.975480  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:13.172324  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:13.187897  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:13.189967  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:13.474968  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:13.671970  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:13.686377  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:13.687296  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:13.975121  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:14.172069  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:14.182836  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:14.186318  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:14.188053  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:14.475632  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:14.673727  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:14.715264  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:14.716131  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:14.975382  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:15.173731  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:15.187276  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:15.189177  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:15.475525  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:15.673009  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:15.686938  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:15.689635  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:15.975627  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:16.171547  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:16.187635  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:16.188064  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:16.475096  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:16.671622  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:16.682052  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:16.686556  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:16.686832  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:16.975271  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:17.172502  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:17.186559  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:17.186635  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:17.475636  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:17.683110  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:17.711713  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:17.717143  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:17.974786  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:18.174725  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:18.193186  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:18.195210  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:18.475225  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:18.680248  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:18.696799  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:18.701609  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:18.702614  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:18.975906  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:19.173461  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:19.186791  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:19.187680  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:19.474497  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:19.676562  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:19.686683  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:19.689770  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:19.975957  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:20.173332  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:20.186767  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:20.187977  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:20.478519  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:20.687199  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:20.691361  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:20.692154  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:20.975949  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:21.172029  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:21.189432  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:21.192966  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:21.193533  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:21.475304  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:21.672201  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:21.686170  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:21.687748  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:21.974902  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:22.172522  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:22.187796  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:22.188401  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:22.475222  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:22.672808  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:22.686156  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:22.687876  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:22.977249  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:23.173806  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:23.200728  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:23.201970  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:23.202710  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:23.475891  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:23.673135  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:23.695801  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:23.697879  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:23.976839  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:24.175299  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:24.190344  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:24.191951  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:24.476591  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:24.673257  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:24.691674  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:24.692210  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:24.984959  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:25.172486  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:25.188584  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:25.189485  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:25.475268  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:25.672867  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:25.695574  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:25.705613  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:25.706702  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:25.975461  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:26.178461  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:26.191494  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:26.191964  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:26.475568  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:26.672178  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:26.691298  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:26.692085  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:26.974489  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:27.173533  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:27.188758  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:27.190633  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:27.483893  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:27.672582  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:27.694627  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:27.696916  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:27.975637  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:28.172532  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:28.187782  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:28.193526  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:28.194401  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:28.474893  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:28.671836  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:28.686228  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:28.687441  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:28.975034  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:29.172379  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:29.187783  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:29.188434  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:29.475992  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:29.674028  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:29.689996  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:29.690923  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:29.974973  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:30.171708  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:30.198904  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:30.199581  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:30.201608  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:30.475336  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:30.678243  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:30.697781  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:30.699109  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:30.976850  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:31.173745  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:31.187309  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:31.188981  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:31.474661  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:31.674330  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:31.693066  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:31.694414  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:31.975025  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:32.172735  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:32.194038  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:32.195310  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:32.474738  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:32.671564  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:32.682539  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:32.687793  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:32.688903  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:32.975148  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:33.171612  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:33.192111  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:33.192653  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:33.475390  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:33.682961  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:33.690560  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 04:32:33.692170  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:33.975488  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:34.171965  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:34.186841  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:34.187192  444204 kapi.go:107] duration metric: took 1m8.506188753s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 04:32:34.475344  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:34.672392  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:34.684858  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:34.975454  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:35.172611  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:35.184865  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:35.185351  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:35.480249  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:35.672251  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:35.686128  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:35.974994  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:36.171857  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:36.185825  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:36.477275  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:36.673573  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:36.719326  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:36.974553  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:37.172763  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:37.197380  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:37.209004  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:37.475687  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:37.672696  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:37.686095  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:37.977652  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:38.175554  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:38.186037  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:38.474524  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:38.672146  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:38.685760  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:38.975780  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:39.173165  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:39.186201  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:39.476054  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:39.672378  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:39.690106  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:39.697254  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:39.975369  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:40.172887  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:40.195822  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:40.475124  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:40.675444  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:40.691419  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:40.975667  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:41.171853  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:41.186031  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:41.475509  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:41.671395  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:41.685217  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:41.974919  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:42.173638  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:42.184489  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:42.186502  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:42.475509  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:42.689381  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:42.704127  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:42.975628  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:43.173669  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:43.188607  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:43.475462  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:43.672289  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:43.687116  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:43.975316  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:44.173790  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:44.188860  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:44.190326  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:44.484275  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:44.673023  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:44.689136  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:44.975357  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:45.176867  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:45.187695  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:45.478006  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:45.674022  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:45.690978  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:45.975751  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:46.177066  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:46.194731  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:46.195313  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:46.519724  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:46.677612  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:46.706259  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:46.975807  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:47.184243  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:47.194708  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:47.476416  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:47.674545  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:47.701203  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:47.975051  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:48.181293  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:48.192510  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:48.196018  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:48.476405  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:48.672068  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:48.686716  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:48.976789  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:49.174019  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:49.203663  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:49.475822  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:49.673038  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:49.687935  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:49.976155  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:50.172375  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:50.187524  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:50.476122  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:50.672206  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:50.683106  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:50.686686  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:50.976128  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:51.173957  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:51.197114  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:51.476952  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:51.672324  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:51.686924  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:51.975283  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:52.171574  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:52.185509  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:52.475045  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:52.671496  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:52.685149  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:52.975419  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:53.171610  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:53.183170  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:53.186069  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:53.474652  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:53.681260  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:53.685844  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:53.984580  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:54.172016  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:54.190313  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:54.474660  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:54.671737  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:54.686555  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:54.975586  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:55.172298  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:55.186775  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:55.189082  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:55.475603  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:55.687358  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:55.691876  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:55.975448  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:56.186694  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:56.213831  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:56.476919  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 04:32:56.672046  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:56.686532  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:56.982501  444204 kapi.go:107] duration metric: took 1m27.011255242s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 04:32:56.993883  444204 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-014077 cluster.
	I0719 04:32:57.006291  444204 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 04:32:57.018117  444204 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0719 04:32:57.172688  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:57.189510  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:57.675132  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:57.683892  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:32:57.686923  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:58.181132  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:58.186903  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:58.672020  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:58.702384  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:59.172204  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:59.187303  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:32:59.671572  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:32:59.685194  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:00.226954  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:00.258245  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:00.258410  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:00.675705  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:00.688306  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:01.173308  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:01.187508  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:01.672976  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:01.690557  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:02.172439  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:02.200876  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:02.671870  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:02.688347  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:02.689434  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:03.173701  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:03.187122  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:03.672165  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:03.688226  444204 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 04:33:04.171545  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:04.185241  444204 kapi.go:107] duration metric: took 1m38.504496682s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0719 04:33:04.674045  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:05.173041  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:05.183095  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:05.673220  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:06.172257  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:06.671659  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:07.174602  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:07.183993  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:07.671416  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:08.171544  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:08.671240  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:09.172390  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:09.673382  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:09.684419  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:10.172594  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:10.671781  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:11.172955  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:11.672600  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:12.190154  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:12.193770  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:12.671545  444204 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 04:33:13.172513  444204 kapi.go:107] duration metric: took 1m47.006352326s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 04:33:13.176375  444204 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, storage-provisioner, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0719 04:33:13.177945  444204 addons.go:510] duration metric: took 1m53.567017393s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher storage-provisioner metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0719 04:33:14.682250  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:17.182498  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:19.681980  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:21.682407  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:24.182980  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:26.682088  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:29.182494  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:31.682658  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:34.182197  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:36.182902  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:38.682277  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:40.684701  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:43.182681  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:45.184012  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:47.682643  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:49.682786  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:52.182420  444204 pod_ready.go:102] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"False"
	I0719 04:33:54.182213  444204 pod_ready.go:92] pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace has status "Ready":"True"
	I0719 04:33:54.182243  444204 pod_ready.go:81] duration metric: took 1m49.006167639s for pod "metrics-server-c59844bb4-6s6pb" in "kube-system" namespace to be "Ready" ...
	I0719 04:33:54.182256  444204 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ms7rm" in "kube-system" namespace to be "Ready" ...
	I0719 04:33:54.188094  444204 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ms7rm" in "kube-system" namespace has status "Ready":"True"
	I0719 04:33:54.188118  444204 pod_ready.go:81] duration metric: took 5.85478ms for pod "nvidia-device-plugin-daemonset-ms7rm" in "kube-system" namespace to be "Ready" ...
	I0719 04:33:54.188139  444204 pod_ready.go:38] duration metric: took 1m50.523723488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:33:54.188155  444204 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:33:54.188660  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 04:33:54.188767  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 04:33:54.248031  444204 cri.go:89] found id: "fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:33:54.248057  444204 cri.go:89] found id: ""
	I0719 04:33:54.248065  444204 logs.go:276] 1 containers: [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a]
	I0719 04:33:54.248120  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.251986  444204 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 04:33:54.252058  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 04:33:54.290232  444204 cri.go:89] found id: "118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:33:54.290256  444204 cri.go:89] found id: ""
	I0719 04:33:54.290264  444204 logs.go:276] 1 containers: [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6]
	I0719 04:33:54.290329  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.294033  444204 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 04:33:54.294112  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 04:33:54.337967  444204 cri.go:89] found id: "47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:33:54.338039  444204 cri.go:89] found id: ""
	I0719 04:33:54.338063  444204 logs.go:276] 1 containers: [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563]
	I0719 04:33:54.338152  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.341838  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 04:33:54.341907  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 04:33:54.384611  444204 cri.go:89] found id: "23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:33:54.384634  444204 cri.go:89] found id: ""
	I0719 04:33:54.384643  444204 logs.go:276] 1 containers: [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1]
	I0719 04:33:54.384732  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.388040  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 04:33:54.388107  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 04:33:54.427893  444204 cri.go:89] found id: "e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:33:54.427916  444204 cri.go:89] found id: ""
	I0719 04:33:54.427924  444204 logs.go:276] 1 containers: [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57]
	I0719 04:33:54.427978  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.431234  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 04:33:54.431301  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 04:33:54.472804  444204 cri.go:89] found id: "6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:33:54.472837  444204 cri.go:89] found id: ""
	I0719 04:33:54.472847  444204 logs.go:276] 1 containers: [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c]
	I0719 04:33:54.472903  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.476173  444204 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 04:33:54.476236  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 04:33:54.511291  444204 cri.go:89] found id: "1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:33:54.511314  444204 cri.go:89] found id: ""
	I0719 04:33:54.511323  444204 logs.go:276] 1 containers: [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5]
	I0719 04:33:54.511376  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:33:54.515274  444204 logs.go:123] Gathering logs for describe nodes ...
	I0719 04:33:54.515315  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 04:33:54.685688  444204 logs.go:123] Gathering logs for kube-scheduler [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1] ...
	I0719 04:33:54.685719  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:33:54.732513  444204 logs.go:123] Gathering logs for kube-proxy [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57] ...
	I0719 04:33:54.732545  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:33:54.769780  444204 logs.go:123] Gathering logs for kindnet [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5] ...
	I0719 04:33:54.769810  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:33:54.820578  444204 logs.go:123] Gathering logs for container status ...
	I0719 04:33:54.820617  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 04:33:54.874252  444204 logs.go:123] Gathering logs for kubelet ...
	I0719 04:33:54.874285  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 04:33:54.919475  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:33:54.919725  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:33:54.975312  444204 logs.go:123] Gathering logs for kube-apiserver [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a] ...
	I0719 04:33:54.975349  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:33:55.049477  444204 logs.go:123] Gathering logs for etcd [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6] ...
	I0719 04:33:55.049524  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:33:55.102624  444204 logs.go:123] Gathering logs for coredns [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563] ...
	I0719 04:33:55.102659  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:33:55.153034  444204 logs.go:123] Gathering logs for kube-controller-manager [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c] ...
	I0719 04:33:55.153068  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:33:55.222884  444204 logs.go:123] Gathering logs for CRI-O ...
	I0719 04:33:55.222920  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 04:33:55.323112  444204 logs.go:123] Gathering logs for dmesg ...
	I0719 04:33:55.323146  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 04:33:55.345360  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:33:55.345389  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 04:33:55.345482  444204 out.go:239] X Problems detected in kubelet:
	W0719 04:33:55.345516  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:33:55.345538  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:33:55.345564  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:33:55.345571  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:34:05.346596  444204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:34:05.361344  444204 api_server.go:72] duration metric: took 2m45.75145707s to wait for apiserver process to appear ...
	I0719 04:34:05.361371  444204 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:34:05.361409  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 04:34:05.361469  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 04:34:05.400030  444204 cri.go:89] found id: "fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:34:05.400050  444204 cri.go:89] found id: ""
	I0719 04:34:05.400058  444204 logs.go:276] 1 containers: [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a]
	I0719 04:34:05.400113  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.404052  444204 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 04:34:05.404127  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 04:34:05.443370  444204 cri.go:89] found id: "118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:34:05.443401  444204 cri.go:89] found id: ""
	I0719 04:34:05.443409  444204 logs.go:276] 1 containers: [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6]
	I0719 04:34:05.443469  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.447043  444204 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 04:34:05.447114  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 04:34:05.485326  444204 cri.go:89] found id: "47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:34:05.485347  444204 cri.go:89] found id: ""
	I0719 04:34:05.485354  444204 logs.go:276] 1 containers: [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563]
	I0719 04:34:05.485408  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.488785  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 04:34:05.488855  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 04:34:05.528071  444204 cri.go:89] found id: "23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:34:05.528098  444204 cri.go:89] found id: ""
	I0719 04:34:05.528106  444204 logs.go:276] 1 containers: [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1]
	I0719 04:34:05.528174  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.531868  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 04:34:05.531955  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 04:34:05.573356  444204 cri.go:89] found id: "e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:34:05.573382  444204 cri.go:89] found id: ""
	I0719 04:34:05.573401  444204 logs.go:276] 1 containers: [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57]
	I0719 04:34:05.573456  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.576846  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 04:34:05.576916  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 04:34:05.614907  444204 cri.go:89] found id: "6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:34:05.614931  444204 cri.go:89] found id: ""
	I0719 04:34:05.614939  444204 logs.go:276] 1 containers: [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c]
	I0719 04:34:05.614997  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.618461  444204 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 04:34:05.618535  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 04:34:05.663804  444204 cri.go:89] found id: "1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:34:05.663827  444204 cri.go:89] found id: ""
	I0719 04:34:05.663835  444204 logs.go:276] 1 containers: [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5]
	I0719 04:34:05.663890  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:05.667396  444204 logs.go:123] Gathering logs for etcd [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6] ...
	I0719 04:34:05.667422  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:34:05.710742  444204 logs.go:123] Gathering logs for coredns [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563] ...
	I0719 04:34:05.710774  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:34:05.758526  444204 logs.go:123] Gathering logs for kube-controller-manager [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c] ...
	I0719 04:34:05.758558  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:34:05.835541  444204 logs.go:123] Gathering logs for kindnet [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5] ...
	I0719 04:34:05.835587  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:34:05.891932  444204 logs.go:123] Gathering logs for container status ...
	I0719 04:34:05.892018  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 04:34:05.961379  444204 logs.go:123] Gathering logs for kubelet ...
	I0719 04:34:05.961410  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 04:34:06.001460  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:34:06.001679  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:34:06.056573  444204 logs.go:123] Gathering logs for dmesg ...
	I0719 04:34:06.056614  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 04:34:06.081171  444204 logs.go:123] Gathering logs for describe nodes ...
	I0719 04:34:06.081252  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 04:34:06.252976  444204 logs.go:123] Gathering logs for kube-apiserver [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a] ...
	I0719 04:34:06.253007  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:34:06.344247  444204 logs.go:123] Gathering logs for kube-scheduler [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1] ...
	I0719 04:34:06.344278  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:34:06.395847  444204 logs.go:123] Gathering logs for kube-proxy [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57] ...
	I0719 04:34:06.395881  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:34:06.441739  444204 logs.go:123] Gathering logs for CRI-O ...
	I0719 04:34:06.441765  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 04:34:06.541913  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:34:06.541948  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 04:34:06.542015  444204 out.go:239] X Problems detected in kubelet:
	W0719 04:34:06.542025  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:34:06.542032  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:34:06.542040  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:34:06.542046  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:34:16.543965  444204 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0719 04:34:16.553400  444204 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0719 04:34:16.555363  444204 api_server.go:141] control plane version: v1.30.3
	I0719 04:34:16.555390  444204 api_server.go:131] duration metric: took 11.194010227s to wait for apiserver health ...
	I0719 04:34:16.555400  444204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:34:16.555422  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 04:34:16.555511  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 04:34:16.596016  444204 cri.go:89] found id: "fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:34:16.596040  444204 cri.go:89] found id: ""
	I0719 04:34:16.596048  444204 logs.go:276] 1 containers: [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a]
	I0719 04:34:16.596116  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.599885  444204 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 04:34:16.599964  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 04:34:16.643995  444204 cri.go:89] found id: "118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:34:16.644016  444204 cri.go:89] found id: ""
	I0719 04:34:16.644024  444204 logs.go:276] 1 containers: [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6]
	I0719 04:34:16.644081  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.647681  444204 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 04:34:16.647754  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 04:34:16.691988  444204 cri.go:89] found id: "47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:34:16.692021  444204 cri.go:89] found id: ""
	I0719 04:34:16.692030  444204 logs.go:276] 1 containers: [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563]
	I0719 04:34:16.692128  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.695927  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 04:34:16.696014  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 04:34:16.738811  444204 cri.go:89] found id: "23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:34:16.738834  444204 cri.go:89] found id: ""
	I0719 04:34:16.738842  444204 logs.go:276] 1 containers: [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1]
	I0719 04:34:16.738896  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.742478  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 04:34:16.742553  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 04:34:16.791542  444204 cri.go:89] found id: "e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:34:16.791566  444204 cri.go:89] found id: ""
	I0719 04:34:16.791574  444204 logs.go:276] 1 containers: [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57]
	I0719 04:34:16.791634  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.795216  444204 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 04:34:16.795325  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 04:34:16.836873  444204 cri.go:89] found id: "6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:34:16.836898  444204 cri.go:89] found id: ""
	I0719 04:34:16.836906  444204 logs.go:276] 1 containers: [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c]
	I0719 04:34:16.836966  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.840535  444204 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 04:34:16.840621  444204 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 04:34:16.879460  444204 cri.go:89] found id: "1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:34:16.879485  444204 cri.go:89] found id: ""
	I0719 04:34:16.879494  444204 logs.go:276] 1 containers: [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5]
	I0719 04:34:16.879566  444204 ssh_runner.go:195] Run: which crictl
	I0719 04:34:16.883276  444204 logs.go:123] Gathering logs for kubelet ...
	I0719 04:34:16.883346  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 04:34:16.919898  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:34:16.920118  444204 logs.go:138] Found kubelet problem: Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:34:16.975358  444204 logs.go:123] Gathering logs for dmesg ...
	I0719 04:34:16.975392  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 04:34:16.999942  444204 logs.go:123] Gathering logs for coredns [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563] ...
	I0719 04:34:16.999971  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563"
	I0719 04:34:17.053700  444204 logs.go:123] Gathering logs for kube-proxy [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57] ...
	I0719 04:34:17.053777  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57"
	I0719 04:34:17.100827  444204 logs.go:123] Gathering logs for kube-controller-manager [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c] ...
	I0719 04:34:17.100855  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c"
	I0719 04:34:17.189836  444204 logs.go:123] Gathering logs for kindnet [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5] ...
	I0719 04:34:17.189875  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5"
	I0719 04:34:17.242023  444204 logs.go:123] Gathering logs for describe nodes ...
	I0719 04:34:17.242058  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 04:34:17.387050  444204 logs.go:123] Gathering logs for kube-apiserver [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a] ...
	I0719 04:34:17.387082  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a"
	I0719 04:34:17.460157  444204 logs.go:123] Gathering logs for etcd [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6] ...
	I0719 04:34:17.460193  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6"
	I0719 04:34:17.504208  444204 logs.go:123] Gathering logs for kube-scheduler [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1] ...
	I0719 04:34:17.504242  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1"
	I0719 04:34:17.550618  444204 logs.go:123] Gathering logs for CRI-O ...
	I0719 04:34:17.550654  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 04:34:17.654806  444204 logs.go:123] Gathering logs for container status ...
	I0719 04:34:17.654847  444204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 04:34:17.725438  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:34:17.725465  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 04:34:17.725525  444204 out.go:239] X Problems detected in kubelet:
	W0719 04:34:17.725536  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: W0719 04:31:24.926175    1519 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	W0719 04:34:17.725543  444204 out.go:239]   Jul 19 04:31:24 addons-014077 kubelet[1519]: E0719 04:31:24.926220    1519 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-014077" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-014077' and this object
	I0719 04:34:17.725555  444204 out.go:304] Setting ErrFile to fd 2...
	I0719 04:34:17.725560  444204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:34:27.739616  444204 system_pods.go:59] 18 kube-system pods found
	I0719 04:34:27.739664  444204 system_pods.go:61] "coredns-7db6d8ff4d-p5jz6" [15f7a070-c1b9-4bdf-b3e6-0828da1ed028] Running
	I0719 04:34:27.739673  444204 system_pods.go:61] "csi-hostpath-attacher-0" [53d3a7de-f014-498f-88ce-c40414300150] Running
	I0719 04:34:27.739678  444204 system_pods.go:61] "csi-hostpath-resizer-0" [d1212fbe-5414-4bdd-a087-175a8236eac6] Running
	I0719 04:34:27.739682  444204 system_pods.go:61] "csi-hostpathplugin-5d84n" [750fc3d6-98b7-4797-a45a-d7849e59a8d8] Running
	I0719 04:34:27.739686  444204 system_pods.go:61] "etcd-addons-014077" [a27153bf-4d84-4f18-8fcd-4a9affe7e283] Running
	I0719 04:34:27.739696  444204 system_pods.go:61] "kindnet-dl4zb" [abdce223-a996-4835-a45e-976b8f051491] Running
	I0719 04:34:27.739717  444204 system_pods.go:61] "kube-apiserver-addons-014077" [0c35929d-2ec9-4ac3-aafe-fc4200ca09ae] Running
	I0719 04:34:27.739726  444204 system_pods.go:61] "kube-controller-manager-addons-014077" [80243f6d-0af4-4084-87fa-39acd70e093c] Running
	I0719 04:34:27.739730  444204 system_pods.go:61] "kube-ingress-dns-minikube" [788540eb-98b8-425c-bd6d-5c74eede8836] Running
	I0719 04:34:27.739734  444204 system_pods.go:61] "kube-proxy-hqgw8" [937c4d03-e6ea-4410-83c1-f3637a52e19d] Running
	I0719 04:34:27.739738  444204 system_pods.go:61] "kube-scheduler-addons-014077" [a60f26d2-5c8f-4b6d-9e80-3435f40ff60c] Running
	I0719 04:34:27.739744  444204 system_pods.go:61] "metrics-server-c59844bb4-6s6pb" [f1e51548-a1be-4356-a620-a46631404c83] Running
	I0719 04:34:27.739748  444204 system_pods.go:61] "nvidia-device-plugin-daemonset-ms7rm" [e10fa14c-5d6e-4792-ba1d-e37851cd7388] Running
	I0719 04:34:27.739755  444204 system_pods.go:61] "registry-656c9c8d9c-99psj" [267507bb-055e-4065-8138-ce3d5f7e0457] Running
	I0719 04:34:27.739759  444204 system_pods.go:61] "registry-proxy-b99sl" [4ee1a72b-b280-4382-82d9-43f79c251273] Running
	I0719 04:34:27.739768  444204 system_pods.go:61] "snapshot-controller-745499f584-5s7lv" [29a87517-28b9-4196-a5db-c8e88ea6fe02] Running
	I0719 04:34:27.739777  444204 system_pods.go:61] "snapshot-controller-745499f584-pfjv7" [2b852f66-d708-44a2-8284-2e07ed87e747] Running
	I0719 04:34:27.739781  444204 system_pods.go:61] "storage-provisioner" [62592988-dc48-43c1-9c20-802f2cb10103] Running
	I0719 04:34:27.739791  444204 system_pods.go:74] duration metric: took 11.184381133s to wait for pod list to return data ...
	I0719 04:34:27.739810  444204 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:34:27.742361  444204 default_sa.go:45] found service account: "default"
	I0719 04:34:27.742387  444204 default_sa.go:55] duration metric: took 2.571092ms for default service account to be created ...
	I0719 04:34:27.742397  444204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:34:27.753545  444204 system_pods.go:86] 18 kube-system pods found
	I0719 04:34:27.753583  444204 system_pods.go:89] "coredns-7db6d8ff4d-p5jz6" [15f7a070-c1b9-4bdf-b3e6-0828da1ed028] Running
	I0719 04:34:27.753592  444204 system_pods.go:89] "csi-hostpath-attacher-0" [53d3a7de-f014-498f-88ce-c40414300150] Running
	I0719 04:34:27.753597  444204 system_pods.go:89] "csi-hostpath-resizer-0" [d1212fbe-5414-4bdd-a087-175a8236eac6] Running
	I0719 04:34:27.753602  444204 system_pods.go:89] "csi-hostpathplugin-5d84n" [750fc3d6-98b7-4797-a45a-d7849e59a8d8] Running
	I0719 04:34:27.753608  444204 system_pods.go:89] "etcd-addons-014077" [a27153bf-4d84-4f18-8fcd-4a9affe7e283] Running
	I0719 04:34:27.753613  444204 system_pods.go:89] "kindnet-dl4zb" [abdce223-a996-4835-a45e-976b8f051491] Running
	I0719 04:34:27.753617  444204 system_pods.go:89] "kube-apiserver-addons-014077" [0c35929d-2ec9-4ac3-aafe-fc4200ca09ae] Running
	I0719 04:34:27.753621  444204 system_pods.go:89] "kube-controller-manager-addons-014077" [80243f6d-0af4-4084-87fa-39acd70e093c] Running
	I0719 04:34:27.753626  444204 system_pods.go:89] "kube-ingress-dns-minikube" [788540eb-98b8-425c-bd6d-5c74eede8836] Running
	I0719 04:34:27.753630  444204 system_pods.go:89] "kube-proxy-hqgw8" [937c4d03-e6ea-4410-83c1-f3637a52e19d] Running
	I0719 04:34:27.753634  444204 system_pods.go:89] "kube-scheduler-addons-014077" [a60f26d2-5c8f-4b6d-9e80-3435f40ff60c] Running
	I0719 04:34:27.753645  444204 system_pods.go:89] "metrics-server-c59844bb4-6s6pb" [f1e51548-a1be-4356-a620-a46631404c83] Running
	I0719 04:34:27.753649  444204 system_pods.go:89] "nvidia-device-plugin-daemonset-ms7rm" [e10fa14c-5d6e-4792-ba1d-e37851cd7388] Running
	I0719 04:34:27.753661  444204 system_pods.go:89] "registry-656c9c8d9c-99psj" [267507bb-055e-4065-8138-ce3d5f7e0457] Running
	I0719 04:34:27.753665  444204 system_pods.go:89] "registry-proxy-b99sl" [4ee1a72b-b280-4382-82d9-43f79c251273] Running
	I0719 04:34:27.753673  444204 system_pods.go:89] "snapshot-controller-745499f584-5s7lv" [29a87517-28b9-4196-a5db-c8e88ea6fe02] Running
	I0719 04:34:27.753677  444204 system_pods.go:89] "snapshot-controller-745499f584-pfjv7" [2b852f66-d708-44a2-8284-2e07ed87e747] Running
	I0719 04:34:27.753681  444204 system_pods.go:89] "storage-provisioner" [62592988-dc48-43c1-9c20-802f2cb10103] Running
	I0719 04:34:27.753689  444204 system_pods.go:126] duration metric: took 11.284998ms to wait for k8s-apps to be running ...
	I0719 04:34:27.753707  444204 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:34:27.753787  444204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:34:27.767156  444204 system_svc.go:56] duration metric: took 13.441102ms WaitForService to wait for kubelet
	I0719 04:34:27.767184  444204 kubeadm.go:582] duration metric: took 3m8.157302198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:34:27.767204  444204 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:34:27.770310  444204 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0719 04:34:27.770341  444204 node_conditions.go:123] node cpu capacity is 2
	I0719 04:34:27.770353  444204 node_conditions.go:105] duration metric: took 3.143918ms to run NodePressure ...
	I0719 04:34:27.770366  444204 start.go:241] waiting for startup goroutines ...
	I0719 04:34:27.770373  444204 start.go:246] waiting for cluster config update ...
	I0719 04:34:27.770390  444204 start.go:255] writing updated cluster config ...
	I0719 04:34:27.770756  444204 ssh_runner.go:195] Run: rm -f paused
	I0719 04:34:28.146154  444204 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 04:34:28.150172  444204 out.go:177] * Done! kubectl is now configured to use "addons-014077" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.188292900Z" level=info msg="Stopped pod sandbox: a550d154a9854a75bb29715eddf9b89acef18b496340e69416f81e91b8634712" id=92d7ee9d-71e4-48b6-b1a9-60fc8f686da5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.288215107Z" level=info msg="Removing container: 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d" id=280d5df2-cdf6-422a-a395-ea6a19bf9bcf name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 19 04:38:13 addons-014077 crio[955]: time="2024-07-19 04:38:13.302635218Z" level=info msg="Removed container 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d: ingress-nginx/ingress-nginx-controller-6d9bd977d4-hhzhm/controller" id=280d5df2-cdf6-422a-a395-ea6a19bf9bcf name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.667010963Z" level=info msg="Removing container: c27274b47addad435d9db0c172c51d157d3aa906fecac257dee2f53326390f56" id=207da587-aa19-4679-9d3a-e1fc5b243986 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.681526120Z" level=info msg="Removed container c27274b47addad435d9db0c172c51d157d3aa906fecac257dee2f53326390f56: ingress-nginx/ingress-nginx-admission-patch-zzm6q/patch" id=207da587-aa19-4679-9d3a-e1fc5b243986 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.683041076Z" level=info msg="Removing container: c930aaf789c045edaf0b309779ba919dc680bf490944c5d6a6e11e6201d97ed2" id=ddd4421b-8405-46b1-878c-da55c1f0838d name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.701302920Z" level=info msg="Removed container c930aaf789c045edaf0b309779ba919dc680bf490944c5d6a6e11e6201d97ed2: ingress-nginx/ingress-nginx-admission-create-f4cmw/create" id=ddd4421b-8405-46b1-878c-da55c1f0838d name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.702778771Z" level=info msg="Stopping pod sandbox: a550d154a9854a75bb29715eddf9b89acef18b496340e69416f81e91b8634712" id=24178b5d-473b-40cd-9c3b-a0ca0e3b166d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.702826220Z" level=info msg="Stopped pod sandbox (already stopped): a550d154a9854a75bb29715eddf9b89acef18b496340e69416f81e91b8634712" id=24178b5d-473b-40cd-9c3b-a0ca0e3b166d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.703117282Z" level=info msg="Removing pod sandbox: a550d154a9854a75bb29715eddf9b89acef18b496340e69416f81e91b8634712" id=f8b9f272-e493-4cdf-bbe5-5bdbd41f682c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.718091480Z" level=info msg="Removed pod sandbox: a550d154a9854a75bb29715eddf9b89acef18b496340e69416f81e91b8634712" id=f8b9f272-e493-4cdf-bbe5-5bdbd41f682c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.718692383Z" level=info msg="Stopping pod sandbox: 82f4768f72682808511de374d2ca8f023fec9647da434c60e8b95bf6b220f1c1" id=a583f8dc-937b-4538-8093-89dbaf38b241 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.718838218Z" level=info msg="Stopped pod sandbox (already stopped): 82f4768f72682808511de374d2ca8f023fec9647da434c60e8b95bf6b220f1c1" id=a583f8dc-937b-4538-8093-89dbaf38b241 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.719239668Z" level=info msg="Removing pod sandbox: 82f4768f72682808511de374d2ca8f023fec9647da434c60e8b95bf6b220f1c1" id=0d2541a5-7f94-465b-be11-59df7c1d26e7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.730885352Z" level=info msg="Removed pod sandbox: 82f4768f72682808511de374d2ca8f023fec9647da434c60e8b95bf6b220f1c1" id=0d2541a5-7f94-465b-be11-59df7c1d26e7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.731451959Z" level=info msg="Stopping pod sandbox: 854321e73e285d7e11d099a3e981f92786d67d33444d1233d71677d41cb8202f" id=270a222b-00cd-430f-9d98-74222ada2b79 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.731489291Z" level=info msg="Stopped pod sandbox (already stopped): 854321e73e285d7e11d099a3e981f92786d67d33444d1233d71677d41cb8202f" id=270a222b-00cd-430f-9d98-74222ada2b79 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.731783298Z" level=info msg="Removing pod sandbox: 854321e73e285d7e11d099a3e981f92786d67d33444d1233d71677d41cb8202f" id=d8e74c95-2fed-4d6a-beee-0c0627e7eb78 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 19 04:39:09 addons-014077 crio[955]: time="2024-07-19 04:39:09.741659780Z" level=info msg="Removed pod sandbox: 854321e73e285d7e11d099a3e981f92786d67d33444d1233d71677d41cb8202f" id=d8e74c95-2fed-4d6a-beee-0c0627e7eb78 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 19 04:40:25 addons-014077 crio[955]: time="2024-07-19 04:40:25.881118195Z" level=info msg="Stopping container: 6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e (timeout: 30s)" id=8b0f8024-6a9a-48c1-b7bc-7f51dff1666d name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 04:40:27 addons-014077 crio[955]: time="2024-07-19 04:40:27.063087438Z" level=info msg="Stopped container 6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e: kube-system/metrics-server-c59844bb4-6s6pb/metrics-server" id=8b0f8024-6a9a-48c1-b7bc-7f51dff1666d name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 04:40:27 addons-014077 crio[955]: time="2024-07-19 04:40:27.063988421Z" level=info msg="Stopping pod sandbox: 87ecfe95f56e7714f5cef2d5affe9b0df960b2bace173fe4162dc367e3221db3" id=4006095f-d728-4c73-bcca-50be03d12122 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 04:40:27 addons-014077 crio[955]: time="2024-07-19 04:40:27.064238211Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-6s6pb Namespace:kube-system ID:87ecfe95f56e7714f5cef2d5affe9b0df960b2bace173fe4162dc367e3221db3 UID:f1e51548-a1be-4356-a620-a46631404c83 NetNS:/var/run/netns/1808b94c-b676-48ba-b7ad-d24fd45b4e9b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 19 04:40:27 addons-014077 crio[955]: time="2024-07-19 04:40:27.064404369Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-6s6pb from CNI network \"kindnet\" (type=ptp)"
	Jul 19 04:40:27 addons-014077 crio[955]: time="2024-07-19 04:40:27.112169209Z" level=info msg="Stopped pod sandbox: 87ecfe95f56e7714f5cef2d5affe9b0df960b2bace173fe4162dc367e3221db3" id=4006095f-d728-4c73-bcca-50be03d12122 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5677cf77d98b7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   164550c89b554       hello-world-app-6778b5fc9f-6lgzw
	c73156257c17e       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         4 minutes ago       Running             nginx                     0                   94a6bb357355d       nginx
	033f70c67db54       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   f2a6bd02665c1       headlamp-7867546754-m7m9q
	149ec8654a558       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            7 minutes ago       Running             gcp-auth                  0                   c9e1618f979fe       gcp-auth-5db96cd9b4-2vhm2
	6cacd68e62b0f       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70   7 minutes ago       Exited              metrics-server            0                   87ecfe95f56e7       metrics-server-c59844bb4-6s6pb
	7f5573e7293ce       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                         7 minutes ago       Running             yakd                      0                   0ab3cb33d37f2       yakd-dashboard-799879c74f-m6mc8
	47eb3c5df2ae6       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                        8 minutes ago       Running             coredns                   0                   c2fc3f29ccbd5       coredns-7db6d8ff4d-p5jz6
	0d31eb5555ec5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   1c8a39fac61db       storage-provisioner
	1507af39dbddd       5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2                                                        9 minutes ago       Running             kindnet-cni               0                   a0007a3207a35       kindnet-dl4zb
	e3ddf4b7cc27b       2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be                                                        9 minutes ago       Running             kube-proxy                0                   d7be84e46f2ba       kube-proxy-hqgw8
	fd7560349cf23       61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca                                                        9 minutes ago       Running             kube-apiserver            0                   74fde2c5c58b0       kube-apiserver-addons-014077
	118afcbd626f2       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                        9 minutes ago       Running             etcd                      0                   a69329f928f84       etcd-addons-014077
	6bf5bc299cd70       8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a                                                        9 minutes ago       Running             kube-controller-manager   0                   20d576ec51462       kube-controller-manager-addons-014077
	23e017a8ce2dc       d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355                                                        9 minutes ago       Running             kube-scheduler            0                   2af17c336b6a1       kube-scheduler-addons-014077
	
	
	==> coredns [47eb3c5df2ae6ec619849950ac537b898b7bd27652ee0ff5d56efd232a91e563] <==
	[INFO] 10.244.0.8:54880 - 15349 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002200552s
	[INFO] 10.244.0.8:45485 - 13803 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067289s
	[INFO] 10.244.0.8:45485 - 40951 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068068s
	[INFO] 10.244.0.8:36893 - 30763 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100691s
	[INFO] 10.244.0.8:36893 - 40228 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000056392s
	[INFO] 10.244.0.8:38493 - 1373 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047605s
	[INFO] 10.244.0.8:38493 - 47707 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008753s
	[INFO] 10.244.0.8:52826 - 21884 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006102s
	[INFO] 10.244.0.8:52826 - 22654 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000102824s
	[INFO] 10.244.0.8:43570 - 8978 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001377713s
	[INFO] 10.244.0.8:43570 - 56351 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001613293s
	[INFO] 10.244.0.8:34405 - 8222 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011157s
	[INFO] 10.244.0.8:34405 - 52253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110364s
	[INFO] 10.244.0.19:52572 - 17396 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001345894s
	[INFO] 10.244.0.19:44051 - 33634 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001684175s
	[INFO] 10.244.0.19:47354 - 23295 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149954s
	[INFO] 10.244.0.19:34007 - 3216 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100576s
	[INFO] 10.244.0.19:53550 - 8188 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000213918s
	[INFO] 10.244.0.19:55000 - 25645 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000240511s
	[INFO] 10.244.0.19:53449 - 4553 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003487052s
	[INFO] 10.244.0.19:37648 - 23208 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002897577s
	[INFO] 10.244.0.19:47195 - 60096 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000732503s
	[INFO] 10.244.0.19:50474 - 14845 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000784071s
	[INFO] 10.244.0.22:40142 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000183601s
	[INFO] 10.244.0.22:42281 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000103907s
	
	
	==> describe nodes <==
	Name:               addons-014077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-014077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=addons-014077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_31_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-014077
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:31:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-014077
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:40:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:38:44 +0000   Fri, 19 Jul 2024 04:31:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:38:44 +0000   Fri, 19 Jul 2024 04:31:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:38:44 +0000   Fri, 19 Jul 2024 04:31:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:38:44 +0000   Fri, 19 Jul 2024 04:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-014077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c3f755ec435453e838245b0ce3ffe74
	  System UUID:                9ee520c4-3132-4006-9e00-175e4d3922ed
	  Boot ID:                    7603d686-a653-4d15-b2a5-a492bcccfba1
	  Kernel Version:             5.15.0-1064-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-6lgzw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  gcp-auth                    gcp-auth-5db96cd9b4-2vhm2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  headlamp                    headlamp-7867546754-m7m9q                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 coredns-7db6d8ff4d-p5jz6                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m8s
	  kube-system                 etcd-addons-014077                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m22s
	  kube-system                 kindnet-dl4zb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m8s
	  kube-system                 kube-apiserver-addons-014077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-addons-014077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-hqgw8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-addons-014077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  yakd-dashboard              yakd-dashboard-799879c74f-m6mc8          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m2s                   kube-proxy       
	  Normal  Starting                 9m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m29s (x8 over 9m29s)  kubelet          Node addons-014077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m29s (x8 over 9m29s)  kubelet          Node addons-014077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m29s (x8 over 9m29s)  kubelet          Node addons-014077 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node addons-014077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node addons-014077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node addons-014077 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m9s                   node-controller  Node addons-014077 event: Registered Node addons-014077 in Controller
	  Normal  NodeReady                8m24s                  kubelet          Node addons-014077 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000685] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000913] FS-Cache: N-cookie d=00000000f601f7e6{9p.inode} n=000000007d7c89ef
	[  +0.001018] FS-Cache: N-key=[8] '85cfc90000000000'
	[  +0.002691] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000940] FS-Cache: O-cookie d=00000000f601f7e6{9p.inode} n=00000000bbe59228
	[  +0.001041] FS-Cache: O-key=[8] '85cfc90000000000'
	[  +0.000697] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000974] FS-Cache: N-cookie d=00000000f601f7e6{9p.inode} n=00000000d3ec7a89
	[  +0.001028] FS-Cache: N-key=[8] '85cfc90000000000'
	[  +2.435117] FS-Cache: Duplicate cookie detected
	[  +0.000680] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000972] FS-Cache: O-cookie d=00000000f601f7e6{9p.inode} n=00000000e0a278fc
	[  +0.001019] FS-Cache: O-key=[8] '84cfc90000000000'
	[  +0.000743] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001013] FS-Cache: N-cookie d=00000000f601f7e6{9p.inode} n=0000000024481a06
	[  +0.001026] FS-Cache: N-key=[8] '84cfc90000000000'
	[  +0.383436] FS-Cache: Duplicate cookie detected
	[  +0.000706] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000957] FS-Cache: O-cookie d=00000000f601f7e6{9p.inode} n=00000000a8bac1f9
	[  +0.001038] FS-Cache: O-key=[8] '90cfc90000000000'
	[  +0.000738] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000917] FS-Cache: N-cookie d=00000000f601f7e6{9p.inode} n=000000007d7c89ef
	[  +0.001022] FS-Cache: N-key=[8] '90cfc90000000000'
	[Jul19 03:57] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [118afcbd626f20378f1cc44c8e83e7c237fa72c472ed700e5c43aa591f7660b6] <==
	{"level":"info","ts":"2024-07-19T04:30:59.532737Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-07-19T04:30:59.554996Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T04:30:59.555534Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T04:30:59.555338Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-19T04:30:59.556614Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-19T04:30:59.556541Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T04:30:59.90248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T04:30:59.902598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T04:30:59.902642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-07-19T04:30:59.902691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T04:30:59.902727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-19T04:30:59.902768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-19T04:30:59.902802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-19T04:30:59.910641Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-014077 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T04:30:59.910745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:30:59.911892Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:30:59.92007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T04:30:59.92245Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:30:59.934503Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:30:59.966886Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:30:59.96696Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:30:59.968479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-19T04:30:59.935281Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T04:30:59.974553Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T04:31:22.571323Z","caller":"traceutil/trace.go:171","msg":"trace[1670416768] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"113.146053ms","start":"2024-07-19T04:31:22.451296Z","end":"2024-07-19T04:31:22.564442Z","steps":["trace[1670416768] 'process raft request'  (duration: 108.768611ms)"],"step_count":1}
	
	
	==> gcp-auth [149ec8654a55866748310882ab7e6d58110ad7a4453225a9ecff226dacbc5eba] <==
	2024/07/19 04:32:56 GCP Auth Webhook started!
	2024/07/19 04:34:29 Ready to marshal response ...
	2024/07/19 04:34:29 Ready to write response ...
	2024/07/19 04:34:29 Ready to marshal response ...
	2024/07/19 04:34:29 Ready to write response ...
	2024/07/19 04:34:29 Ready to marshal response ...
	2024/07/19 04:34:29 Ready to write response ...
	2024/07/19 04:34:39 Ready to marshal response ...
	2024/07/19 04:34:39 Ready to write response ...
	2024/07/19 04:34:45 Ready to marshal response ...
	2024/07/19 04:34:45 Ready to write response ...
	2024/07/19 04:34:45 Ready to marshal response ...
	2024/07/19 04:34:45 Ready to write response ...
	2024/07/19 04:34:53 Ready to marshal response ...
	2024/07/19 04:34:53 Ready to write response ...
	2024/07/19 04:35:04 Ready to marshal response ...
	2024/07/19 04:35:04 Ready to write response ...
	2024/07/19 04:35:25 Ready to marshal response ...
	2024/07/19 04:35:25 Ready to write response ...
	2024/07/19 04:35:47 Ready to marshal response ...
	2024/07/19 04:35:47 Ready to write response ...
	2024/07/19 04:38:07 Ready to marshal response ...
	2024/07/19 04:38:07 Ready to write response ...
	
	
	==> kernel <==
	 04:40:27 up  2:22,  0 users,  load average: 0.46, 0.92, 2.22
	Linux addons-014077 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [1507af39dbddd1081a9d96d7bd03fef6a58df55addb65a8a6ea8b150916776a5] <==
	I0719 04:39:12.740593       1 main.go:303] handling current node
	I0719 04:39:22.740833       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:39:22.740867       1 main.go:303] handling current node
	W0719 04:39:27.045538       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:39:27.045576       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0719 04:39:32.740820       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:39:32.740857       1 main.go:303] handling current node
	W0719 04:39:42.283067       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0719 04:39:42.283110       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0719 04:39:42.586167       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:39:42.586200       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0719 04:39:42.740677       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:39:42.740714       1 main.go:303] handling current node
	I0719 04:39:52.740762       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:39:52.740800       1 main.go:303] handling current node
	I0719 04:40:02.740521       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:40:02.740630       1 main.go:303] handling current node
	W0719 04:40:05.029171       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:40:05.029208       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0719 04:40:12.740612       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:40:12.740729       1 main.go:303] handling current node
	I0719 04:40:22.740543       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0719 04:40:22.740578       1 main.go:303] handling current node
	W0719 04:40:26.716301       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0719 04:40:26.716333       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kube-apiserver [fd7560349cf230c26ae717e029b1001f57e3347edf0e0deea2a7ad027d33028a] <==
	E0719 04:33:53.980480       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0719 04:33:53.981595       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.130.223:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.130.223:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.130.223:443: connect: connection refused
	I0719 04:33:54.052371       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 04:34:29.039523       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.21.55"}
	E0719 04:35:09.215912       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0719 04:35:16.767225       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0719 04:35:41.908438       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:41.908618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:41.932400       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:41.933583       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:41.960132       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:41.960184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:41.985519       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:41.985610       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:42.079606       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 04:35:42.079861       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 04:35:42.222376       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0719 04:35:42.960945       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0719 04:35:43.081290       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0719 04:35:43.155891       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0719 04:35:43.320889       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0719 04:35:47.721508       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0719 04:35:48.019984       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.251.227"}
	I0719 04:38:08.145261       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.155.184"}
	
	
	==> kube-controller-manager [6bf5bc299cd70545714e45d8297f6b9b5b6de09367d9dd23ff440b2aad2dc05c] <==
	E0719 04:38:16.920112       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0719 04:38:20.066338       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0719 04:38:27.388436       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:38:27.388560       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:38:27.522733       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:38:27.522782       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:38:45.538514       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:38:45.538553       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:39:07.431187       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:39:07.431225       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:39:08.139333       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:39:08.139371       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:39:12.944213       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:39:12.944251       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:39:26.857892       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:39:26.857932       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:39:51.128689       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:39:51.128731       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:39:55.343243       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:39:55.343285       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:40:02.066795       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:40:02.066876       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 04:40:02.192764       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 04:40:02.192820       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0719 04:40:25.856857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="4.857µs"
	
	
	==> kube-proxy [e3ddf4b7cc27b99ca1a20ac534fc413eb7b81dc83d792d559ecab971e9a56f57] <==
	I0719 04:31:24.851587       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:31:25.063014       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0719 04:31:25.283660       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0719 04:31:25.283733       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:31:25.289457       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0719 04:31:25.289554       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0719 04:31:25.289600       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:31:25.289823       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:31:25.289872       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:31:25.308360       1 config.go:192] "Starting service config controller"
	I0719 04:31:25.308393       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:31:25.308439       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:31:25.308444       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:31:25.310248       1 config.go:319] "Starting node config controller"
	I0719 04:31:25.310264       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:31:25.408531       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:31:25.409459       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:31:25.410977       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23e017a8ce2dcd4012683607185dffc3c74c3b42ee6249e03e6c44f5c07c2eb1] <==
	W0719 04:31:03.396970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:31:03.397041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 04:31:03.397132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 04:31:03.397171       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:03.397254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:31:03.397305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:31:03.397417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:31:03.397430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:31:03.397506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:31:03.397517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:03.397577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 04:31:03.397587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 04:31:04.201787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:31:04.201921       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:31:04.326697       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:31:04.326813       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:31:04.334483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:31:04.334724       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 04:31:04.382256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:31:04.382418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 04:31:04.401563       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 04:31:04.401681       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 04:31:04.435128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:31:04.435170       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0719 04:31:07.085523       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.286694    1519 scope.go:117] "RemoveContainer" containerID="53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.302882    1519 scope.go:117] "RemoveContainer" containerID="53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: E0719 04:38:13.303290    1519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d\": container with ID starting with 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d not found: ID does not exist" containerID="53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.303330    1519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d"} err="failed to get container status \"53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d\": rpc error: code = NotFound desc = could not find container \"53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d\": container with ID starting with 53bf5ea7cac369f69705e0fb4bcb97dda94a95ceda448f150f9350ad1c8e451d not found: ID does not exist"
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.313968    1519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wnl5\" (UniqueName: \"kubernetes.io/projected/01da5791-1cd6-42e8-be85-01f653c78ec1-kube-api-access-7wnl5\") pod \"01da5791-1cd6-42e8-be85-01f653c78ec1\" (UID: \"01da5791-1cd6-42e8-be85-01f653c78ec1\") "
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.314029    1519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01da5791-1cd6-42e8-be85-01f653c78ec1-webhook-cert\") pod \"01da5791-1cd6-42e8-be85-01f653c78ec1\" (UID: \"01da5791-1cd6-42e8-be85-01f653c78ec1\") "
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.316517    1519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01da5791-1cd6-42e8-be85-01f653c78ec1-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "01da5791-1cd6-42e8-be85-01f653c78ec1" (UID: "01da5791-1cd6-42e8-be85-01f653c78ec1"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.319091    1519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01da5791-1cd6-42e8-be85-01f653c78ec1-kube-api-access-7wnl5" (OuterVolumeSpecName: "kube-api-access-7wnl5") pod "01da5791-1cd6-42e8-be85-01f653c78ec1" (UID: "01da5791-1cd6-42e8-be85-01f653c78ec1"). InnerVolumeSpecName "kube-api-access-7wnl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.414647    1519 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7wnl5\" (UniqueName: \"kubernetes.io/projected/01da5791-1cd6-42e8-be85-01f653c78ec1-kube-api-access-7wnl5\") on node \"addons-014077\" DevicePath \"\""
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.414685    1519 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01da5791-1cd6-42e8-be85-01f653c78ec1-webhook-cert\") on node \"addons-014077\" DevicePath \"\""
	Jul 19 04:38:13 addons-014077 kubelet[1519]: I0719 04:38:13.893854    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01da5791-1cd6-42e8-be85-01f653c78ec1" path="/var/lib/kubelet/pods/01da5791-1cd6-42e8-be85-01f653c78ec1/volumes"
	Jul 19 04:39:09 addons-014077 kubelet[1519]: I0719 04:39:09.665892    1519 scope.go:117] "RemoveContainer" containerID="c27274b47addad435d9db0c172c51d157d3aa906fecac257dee2f53326390f56"
	Jul 19 04:39:09 addons-014077 kubelet[1519]: I0719 04:39:09.681851    1519 scope.go:117] "RemoveContainer" containerID="c930aaf789c045edaf0b309779ba919dc680bf490944c5d6a6e11e6201d97ed2"
	Jul 19 04:40:25 addons-014077 kubelet[1519]: I0719 04:40:25.879669    1519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-6lgzw" podStartSLOduration=137.814476553 podStartE2EDuration="2m18.87965118s" podCreationTimestamp="2024-07-19 04:38:07 +0000 UTC" firstStartedPulling="2024-07-19 04:38:08.304227355 +0000 UTC m=+422.560887953" lastFinishedPulling="2024-07-19 04:38:09.369401981 +0000 UTC m=+423.626062580" observedRunningTime="2024-07-19 04:38:10.289677921 +0000 UTC m=+424.546338528" watchObservedRunningTime="2024-07-19 04:40:25.87965118 +0000 UTC m=+560.136311787"
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.192772    1519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f1e51548-a1be-4356-a620-a46631404c83-tmp-dir\") pod \"f1e51548-a1be-4356-a620-a46631404c83\" (UID: \"f1e51548-a1be-4356-a620-a46631404c83\") "
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.192843    1519 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvr4d\" (UniqueName: \"kubernetes.io/projected/f1e51548-a1be-4356-a620-a46631404c83-kube-api-access-wvr4d\") pod \"f1e51548-a1be-4356-a620-a46631404c83\" (UID: \"f1e51548-a1be-4356-a620-a46631404c83\") "
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.193437    1519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1e51548-a1be-4356-a620-a46631404c83-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f1e51548-a1be-4356-a620-a46631404c83" (UID: "f1e51548-a1be-4356-a620-a46631404c83"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.204574    1519 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e51548-a1be-4356-a620-a46631404c83-kube-api-access-wvr4d" (OuterVolumeSpecName: "kube-api-access-wvr4d") pod "f1e51548-a1be-4356-a620-a46631404c83" (UID: "f1e51548-a1be-4356-a620-a46631404c83"). InnerVolumeSpecName "kube-api-access-wvr4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.294078    1519 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f1e51548-a1be-4356-a620-a46631404c83-tmp-dir\") on node \"addons-014077\" DevicePath \"\""
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.294118    1519 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wvr4d\" (UniqueName: \"kubernetes.io/projected/f1e51548-a1be-4356-a620-a46631404c83-kube-api-access-wvr4d\") on node \"addons-014077\" DevicePath \"\""
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.555249    1519 scope.go:117] "RemoveContainer" containerID="6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e"
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.584124    1519 scope.go:117] "RemoveContainer" containerID="6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e"
	Jul 19 04:40:27 addons-014077 kubelet[1519]: E0719 04:40:27.584534    1519 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e\": container with ID starting with 6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e not found: ID does not exist" containerID="6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e"
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.584568    1519 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e"} err="failed to get container status \"6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e\": rpc error: code = NotFound desc = could not find container \"6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e\": container with ID starting with 6cacd68e62b0fdb11c19f4986be59c3ba0652069809fb2d0031572560f43c48e not found: ID does not exist"
	Jul 19 04:40:27 addons-014077 kubelet[1519]: I0719 04:40:27.893570    1519 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e51548-a1be-4356-a620-a46631404c83" path="/var/lib/kubelet/pods/f1e51548-a1be-4356-a620-a46631404c83/volumes"
	
	
	==> storage-provisioner [0d31eb5555ec5997398ddfc6570a006b5fa2a1f15a5bae69e32f41d81d50c4c5] <==
	I0719 04:32:03.977001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 04:32:04.044050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 04:32:04.044170       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 04:32:04.143880       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 04:32:04.144168       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-014077_d4190dd4-3867-4cea-83e4-0ad52c429814!
	I0719 04:32:04.163730       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fadddc2b-9cda-4345-a2a3-97b816911dce", APIVersion:"v1", ResourceVersion:"942", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-014077_d4190dd4-3867-4cea-83e4-0ad52c429814 became leader
	I0719 04:32:04.244979       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-014077_d4190dd4-3867-4cea-83e4-0ad52c429814!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-014077 -n addons-014077
helpers_test.go:261: (dbg) Run:  kubectl --context addons-014077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (286.14s)

Test pass (301/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.44
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.30.3/json-events 7.08
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.22
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 7.88
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.17
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.35
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.19
30 TestBinaryMirror 0.57
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 247.4
38 TestAddons/parallel/Registry 16.01
40 TestAddons/parallel/InspektorGadget 10.94
44 TestAddons/parallel/CSI 45.17
45 TestAddons/parallel/Headlamp 11.95
46 TestAddons/parallel/CloudSpanner 6.56
47 TestAddons/parallel/LocalPath 51.34
48 TestAddons/parallel/NvidiaDevicePlugin 6.52
49 TestAddons/parallel/Yakd 5.01
53 TestAddons/serial/GCPAuth/Namespaces 0.18
54 TestAddons/StoppedEnableDisable 12.21
55 TestCertOptions 37.19
56 TestCertExpiration 256.44
58 TestForceSystemdFlag 41.35
59 TestForceSystemdEnv 36.55
65 TestErrorSpam/setup 30
66 TestErrorSpam/start 0.69
67 TestErrorSpam/status 0.97
68 TestErrorSpam/pause 1.64
69 TestErrorSpam/unpause 1.71
70 TestErrorSpam/stop 1.37
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 57.58
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 54.61
77 TestFunctional/serial/KubeContext 0.07
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.13
82 TestFunctional/serial/CacheCmd/cache/add_local 1.04
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
87 TestFunctional/serial/CacheCmd/cache/delete 0.11
88 TestFunctional/serial/MinikubeKubectlCmd 0.2
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
90 TestFunctional/serial/ExtraConfig 31.77
91 TestFunctional/serial/ComponentHealth 0.1
92 TestFunctional/serial/LogsCmd 1.68
93 TestFunctional/serial/LogsFileCmd 1.69
94 TestFunctional/serial/InvalidService 4.71
96 TestFunctional/parallel/ConfigCmd 0.44
97 TestFunctional/parallel/DashboardCmd 7.62
98 TestFunctional/parallel/DryRun 0.64
99 TestFunctional/parallel/InternationalLanguage 0.19
100 TestFunctional/parallel/StatusCmd 1.32
104 TestFunctional/parallel/ServiceCmdConnect 13.67
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 26.31
108 TestFunctional/parallel/SSHCmd 0.65
109 TestFunctional/parallel/CpCmd 1.82
111 TestFunctional/parallel/FileSync 0.38
112 TestFunctional/parallel/CertSync 1.95
116 TestFunctional/parallel/NodeLabels 0.09
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
120 TestFunctional/parallel/License 0.31
121 TestFunctional/parallel/Version/short 0.06
122 TestFunctional/parallel/Version/components 1.24
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.84
128 TestFunctional/parallel/ImageCommands/Setup 0.76
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.09
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.52
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.38
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
141 TestFunctional/parallel/ImageCommands/ImageRemove 1.02
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.02
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
152 TestFunctional/parallel/ServiceCmd/List 0.6
153 TestFunctional/parallel/ProfileCmd/profile_list 0.5
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
155 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
156 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
157 TestFunctional/parallel/ServiceCmd/Format 0.48
158 TestFunctional/parallel/ServiceCmd/URL 0.45
159 TestFunctional/parallel/MountCmd/any-port 11.34
160 TestFunctional/parallel/MountCmd/specific-port 2.1
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/StartCluster 180.64
169 TestMultiControlPlane/serial/DeployApp 7.45
170 TestMultiControlPlane/serial/PingHostFromPods 1.55
171 TestMultiControlPlane/serial/AddWorkerNode 35.79
172 TestMultiControlPlane/serial/NodeLabels 0.12
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
174 TestMultiControlPlane/serial/CopyFile 18.06
175 TestMultiControlPlane/serial/StopSecondaryNode 12.71
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.58
177 TestMultiControlPlane/serial/RestartSecondaryNode 21.34
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 13.83
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 201.81
180 TestMultiControlPlane/serial/DeleteSecondaryNode 13.38
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
182 TestMultiControlPlane/serial/StopCluster 35.69
183 TestMultiControlPlane/serial/RestartCluster 118.82
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
185 TestMultiControlPlane/serial/AddSecondaryNode 71.35
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
190 TestJSONOutput/start/Command 85.64
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.74
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.69
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.89
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.2
215 TestKicCustomNetwork/create_custom_network 39.05
216 TestKicCustomNetwork/use_default_bridge_network 33.66
217 TestKicExistingNetwork 33.43
218 TestKicCustomSubnet 33.05
219 TestKicStaticIP 34.22
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 66.66
224 TestMountStart/serial/StartWithMountFirst 6.91
225 TestMountStart/serial/VerifyMountFirst 0.26
226 TestMountStart/serial/StartWithMountSecond 6.45
227 TestMountStart/serial/VerifyMountSecond 0.25
228 TestMountStart/serial/DeleteFirst 1.59
229 TestMountStart/serial/VerifyMountPostDelete 0.28
230 TestMountStart/serial/Stop 1.21
231 TestMountStart/serial/RestartStopped 8.04
232 TestMountStart/serial/VerifyMountPostStop 0.24
235 TestMultiNode/serial/FreshStart2Nodes 114.71
236 TestMultiNode/serial/DeployApp2Nodes 5.48
237 TestMultiNode/serial/PingHostFrom2Pods 0.98
238 TestMultiNode/serial/AddNode 26.74
239 TestMultiNode/serial/MultiNodeLabels 0.09
240 TestMultiNode/serial/ProfileList 0.32
241 TestMultiNode/serial/CopyFile 9.82
242 TestMultiNode/serial/StopNode 2.24
243 TestMultiNode/serial/StartAfterStop 10
244 TestMultiNode/serial/RestartKeepsNodes 89.64
245 TestMultiNode/serial/DeleteNode 5.21
246 TestMultiNode/serial/StopMultiNode 23.87
247 TestMultiNode/serial/RestartMultiNode 61.02
248 TestMultiNode/serial/ValidateNameConflict 33.16
253 TestPreload 124.72
255 TestScheduledStopUnix 107.83
258 TestInsufficientStorage 10.43
259 TestRunningBinaryUpgrade 70.34
261 TestKubernetesUpgrade 382.48
262 TestMissingContainerUpgrade 164.45
264 TestPause/serial/Start 97.04
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
267 TestNoKubernetes/serial/StartWithK8s 43.37
268 TestNoKubernetes/serial/StartWithStopK8s 13.88
269 TestNoKubernetes/serial/Start 6.08
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
271 TestNoKubernetes/serial/ProfileList 1
272 TestNoKubernetes/serial/Stop 1.23
273 TestNoKubernetes/serial/StartNoArgs 7.74
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
282 TestNetworkPlugins/group/false 3.64
286 TestPause/serial/SecondStartNoReconfiguration 30.87
287 TestPause/serial/Pause 0.86
288 TestPause/serial/VerifyStatus 0.4
289 TestPause/serial/Unpause 1.39
290 TestPause/serial/PauseAgain 1.33
291 TestPause/serial/DeletePaused 3.04
292 TestPause/serial/VerifyDeletedResources 0.44
293 TestStoppedBinaryUpgrade/Setup 1.2
294 TestStoppedBinaryUpgrade/Upgrade 86.53
302 TestNetworkPlugins/group/auto/Start 99.09
303 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
304 TestNetworkPlugins/group/kindnet/Start 92.47
305 TestNetworkPlugins/group/auto/KubeletFlags 0.28
306 TestNetworkPlugins/group/auto/NetCatPod 10.28
307 TestNetworkPlugins/group/auto/DNS 0.19
308 TestNetworkPlugins/group/auto/Localhost 0.16
309 TestNetworkPlugins/group/auto/HairPin 0.17
310 TestNetworkPlugins/group/calico/Start 71.71
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
313 TestNetworkPlugins/group/kindnet/NetCatPod 13.35
314 TestNetworkPlugins/group/kindnet/DNS 0.19
315 TestNetworkPlugins/group/kindnet/Localhost 0.17
316 TestNetworkPlugins/group/kindnet/HairPin 0.19
317 TestNetworkPlugins/group/custom-flannel/Start 76.89
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.34
320 TestNetworkPlugins/group/calico/NetCatPod 14.32
321 TestNetworkPlugins/group/calico/DNS 0.22
322 TestNetworkPlugins/group/calico/Localhost 0.16
323 TestNetworkPlugins/group/calico/HairPin 0.17
324 TestNetworkPlugins/group/enable-default-cni/Start 46.51
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.37
327 TestNetworkPlugins/group/custom-flannel/DNS 0.17
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.35
332 TestNetworkPlugins/group/flannel/Start 74.47
333 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
336 TestNetworkPlugins/group/bridge/Start 90.14
337 TestNetworkPlugins/group/flannel/ControllerPod 6
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
339 TestNetworkPlugins/group/flannel/NetCatPod 11.42
340 TestNetworkPlugins/group/flannel/DNS 0.18
341 TestNetworkPlugins/group/flannel/Localhost 0.16
342 TestNetworkPlugins/group/flannel/HairPin 0.16
344 TestStartStop/group/old-k8s-version/serial/FirstStart 184.41
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
346 TestNetworkPlugins/group/bridge/NetCatPod 12.44
347 TestNetworkPlugins/group/bridge/DNS 0.19
348 TestNetworkPlugins/group/bridge/Localhost 0.18
349 TestNetworkPlugins/group/bridge/HairPin 0.15
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 94.78
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
356 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 277.81
357 TestStartStop/group/old-k8s-version/serial/DeployApp 8.59
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.49
359 TestStartStop/group/old-k8s-version/serial/Stop 12.66
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
361 TestStartStop/group/old-k8s-version/serial/SecondStart 140.14
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
364 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
365 TestStartStop/group/old-k8s-version/serial/Pause 2.94
367 TestStartStop/group/embed-certs/serial/FirstStart 89.46
368 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
369 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
370 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
371 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.19
372 TestStartStop/group/embed-certs/serial/DeployApp 9.56
374 TestStartStop/group/no-preload/serial/FirstStart 66.44
375 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
376 TestStartStop/group/embed-certs/serial/Stop 13.81
377 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
378 TestStartStop/group/embed-certs/serial/SecondStart 275.8
379 TestStartStop/group/no-preload/serial/DeployApp 8.44
380 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
381 TestStartStop/group/no-preload/serial/Stop 11.96
382 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
383 TestStartStop/group/no-preload/serial/SecondStart 268.88
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/embed-certs/serial/Pause 3.03
389 TestStartStop/group/newest-cni/serial/FirstStart 41.63
390 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/newest-cni/serial/DeployApp 0
392 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.21
393 TestStartStop/group/newest-cni/serial/Stop 1.29
394 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
395 TestStartStop/group/newest-cni/serial/SecondStart 18.27
396 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
397 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
398 TestStartStop/group/no-preload/serial/Pause 4.52
399 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
401 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
402 TestStartStop/group/newest-cni/serial/Pause 2.76
TestDownloadOnly/v1.20.0/json-events (8.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-596201 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-596201 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.443924826s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.44s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-596201
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-596201: exit status 85 (74.870973ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-596201 | jenkins | v1.33.1 | 19 Jul 24 04:29 UTC |          |
	|         | -p download-only-596201        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:29:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:29:53.969514  443159 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:29:53.969667  443159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:29:53.969691  443159 out.go:304] Setting ErrFile to fd 2...
	I0719 04:29:53.969698  443159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:29:53.969991  443159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	W0719 04:29:53.970165  443159 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19302-437615/.minikube/config/config.json: open /home/jenkins/minikube-integration/19302-437615/.minikube/config/config.json: no such file or directory
	I0719 04:29:53.970658  443159 out.go:298] Setting JSON to true
	I0719 04:29:53.971588  443159 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7939,"bootTime":1721355455,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0719 04:29:53.971652  443159 start.go:139] virtualization:  
	I0719 04:29:53.974697  443159 out.go:97] [download-only-596201] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0719 04:29:53.974916  443159 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 04:29:53.974950  443159 notify.go:220] Checking for updates...
	I0719 04:29:53.976742  443159 out.go:169] MINIKUBE_LOCATION=19302
	I0719 04:29:53.978425  443159 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:29:53.980694  443159 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:29:53.982213  443159 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	I0719 04:29:53.983788  443159 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0719 04:29:53.987146  443159 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 04:29:53.987459  443159 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:29:54.029686  443159 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 04:29:54.029799  443159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:29:54.090153  443159 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-19 04:29:54.08076604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:29:54.090262  443159 docker.go:307] overlay module found
	I0719 04:29:54.091954  443159 out.go:97] Using the docker driver based on user configuration
	I0719 04:29:54.091980  443159 start.go:297] selected driver: docker
	I0719 04:29:54.091987  443159 start.go:901] validating driver "docker" against <nil>
	I0719 04:29:54.092106  443159 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:29:54.156300  443159 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-19 04:29:54.14748378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:29:54.156463  443159 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:29:54.156757  443159 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0719 04:29:54.156909  443159 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 04:29:54.159224  443159 out.go:169] Using Docker driver with root privileges
	I0719 04:29:54.161256  443159 cni.go:84] Creating CNI manager for ""
	I0719 04:29:54.161277  443159 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:29:54.161289  443159 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:29:54.161392  443159 start.go:340] cluster config:
	{Name:download-only-596201 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-596201 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:29:54.163640  443159 out.go:97] Starting "download-only-596201" primary control-plane node in "download-only-596201" cluster
	I0719 04:29:54.163665  443159 cache.go:121] Beginning downloading kic base image for docker with crio
	I0719 04:29:54.165543  443159 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 04:29:54.165570  443159 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 04:29:54.165712  443159 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 04:29:54.181053  443159 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 04:29:54.181244  443159 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 04:29:54.181356  443159 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 04:29:54.240464  443159 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0719 04:29:54.240514  443159 cache.go:56] Caching tarball of preloaded images
	I0719 04:29:54.240683  443159 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 04:29:54.243286  443159 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 04:29:54.243312  443159 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0719 04:29:54.346656  443159 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-596201 host does not exist
	  To start a cluster, run: "minikube start -p download-only-596201"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-596201
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (7.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-248286 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-248286 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.082433761s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.08s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-248286
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-248286: exit status 85 (69.790887ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-596201 | jenkins | v1.33.1 | 19 Jul 24 04:29 UTC |                     |
	|         | -p download-only-596201        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| delete  | -p download-only-596201        | download-only-596201 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| start   | -o=json --download-only        | download-only-248286 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | -p download-only-248286        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:30:02
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:30:02.830544  443374 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:02.830750  443374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:02.830777  443374 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:02.830797  443374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:02.831071  443374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 04:30:02.831522  443374 out.go:298] Setting JSON to true
	I0719 04:30:02.832534  443374 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7948,"bootTime":1721355455,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0719 04:30:02.832636  443374 start.go:139] virtualization:  
	I0719 04:30:02.835022  443374 out.go:97] [download-only-248286] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0719 04:30:02.835212  443374 notify.go:220] Checking for updates...
	I0719 04:30:02.837419  443374 out.go:169] MINIKUBE_LOCATION=19302
	I0719 04:30:02.839663  443374 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:30:02.841717  443374 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:30:02.843388  443374 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	I0719 04:30:02.845376  443374 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0719 04:30:02.849449  443374 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 04:30:02.849760  443374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:30:02.871031  443374 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 04:30:02.871147  443374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:30:02.936763  443374 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-19 04:30:02.926088548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:30:02.936866  443374 docker.go:307] overlay module found
	I0719 04:30:02.938779  443374 out.go:97] Using the docker driver based on user configuration
	I0719 04:30:02.938803  443374 start.go:297] selected driver: docker
	I0719 04:30:02.938810  443374 start.go:901] validating driver "docker" against <nil>
	I0719 04:30:02.938912  443374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:30:02.992569  443374 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-19 04:30:02.983723446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:30:02.992753  443374 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:30:02.993045  443374 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0719 04:30:02.993201  443374 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 04:30:02.995057  443374 out.go:169] Using Docker driver with root privileges
	I0719 04:30:02.996417  443374 cni.go:84] Creating CNI manager for ""
	I0719 04:30:02.996435  443374 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:30:02.996446  443374 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:30:02.996529  443374 start.go:340] cluster config:
	{Name:download-only-248286 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-248286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:30:02.998219  443374 out.go:97] Starting "download-only-248286" primary control-plane node in "download-only-248286" cluster
	I0719 04:30:02.998245  443374 cache.go:121] Beginning downloading kic base image for docker with crio
	I0719 04:30:02.999880  443374 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 04:30:02.999906  443374 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:30:03.000074  443374 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 04:30:03.016046  443374 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 04:30:03.016179  443374 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 04:30:03.016205  443374 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 04:30:03.016211  443374 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 04:30:03.016222  443374 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 04:30:03.066206  443374 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	I0719 04:30:03.066258  443374 cache.go:56] Caching tarball of preloaded images
	I0719 04:30:03.066458  443374 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:30:03.068381  443374 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 04:30:03.068411  443374 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4 ...
	I0719 04:30:03.177385  443374 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:bace9a3612be7d31e4d3c3d446951ced -> /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-248286 host does not exist
	  To start a cluster, run: "minikube start -p download-only-248286"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-248286
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (7.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-521092 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-521092 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.877382056s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (7.88s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-521092
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-521092: exit status 85 (167.366576ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-596201 | jenkins | v1.33.1 | 19 Jul 24 04:29 UTC |                     |
	|         | -p download-only-596201             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| delete  | -p download-only-596201             | download-only-596201 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| start   | -o=json --download-only             | download-only-248286 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | -p download-only-248286             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| delete  | -p download-only-248286             | download-only-248286 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC | 19 Jul 24 04:30 UTC |
	| start   | -o=json --download-only             | download-only-521092 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | -p download-only-521092             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:30:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:30:10.334461  443585 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:10.334578  443585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:10.334589  443585 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:10.334594  443585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:10.334850  443585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 04:30:10.335251  443585 out.go:298] Setting JSON to true
	I0719 04:30:10.336174  443585 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":7956,"bootTime":1721355455,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0719 04:30:10.336246  443585 start.go:139] virtualization:  
	I0719 04:30:10.338511  443585 out.go:97] [download-only-521092] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0719 04:30:10.338748  443585 notify.go:220] Checking for updates...
	I0719 04:30:10.341009  443585 out.go:169] MINIKUBE_LOCATION=19302
	I0719 04:30:10.342863  443585 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:30:10.344630  443585 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:30:10.346367  443585 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	I0719 04:30:10.348059  443585 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0719 04:30:10.351162  443585 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 04:30:10.351477  443585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:30:10.372186  443585 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 04:30:10.372314  443585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:30:10.444637  443585 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-19 04:30:10.435220778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:30:10.444771  443585 docker.go:307] overlay module found
	I0719 04:30:10.452165  443585 out.go:97] Using the docker driver based on user configuration
	I0719 04:30:10.452211  443585 start.go:297] selected driver: docker
	I0719 04:30:10.452219  443585 start.go:901] validating driver "docker" against <nil>
	I0719 04:30:10.452337  443585 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:30:10.503347  443585 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-19 04:30:10.49461459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:30:10.503524  443585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:30:10.503855  443585 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0719 04:30:10.504015  443585 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 04:30:10.505956  443585 out.go:169] Using Docker driver with root privileges
	I0719 04:30:10.507395  443585 cni.go:84] Creating CNI manager for ""
	I0719 04:30:10.507414  443585 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0719 04:30:10.507427  443585 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:30:10.507508  443585 start.go:340] cluster config:
	{Name:download-only-521092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-521092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:30:10.509212  443585 out.go:97] Starting "download-only-521092" primary control-plane node in "download-only-521092" cluster
	I0719 04:30:10.509233  443585 cache.go:121] Beginning downloading kic base image for docker with crio
	I0719 04:30:10.510951  443585 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0719 04:30:10.510980  443585 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 04:30:10.511180  443585 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0719 04:30:10.525216  443585 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0719 04:30:10.525335  443585 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0719 04:30:10.525360  443585 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0719 04:30:10.525370  443585 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0719 04:30:10.525377  443585 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0719 04:30:10.587038  443585 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I0719 04:30:10.587065  443585 cache.go:56] Caching tarball of preloaded images
	I0719 04:30:10.587241  443585 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 04:30:10.589706  443585 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 04:30:10.589731  443585 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4 ...
	I0719 04:30:10.686005  443585 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:70b5971c257ae4defe1f5d041a04e29c -> /home/jenkins/minikube-integration/19302-437615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-521092 host does not exist
	  To start a cluster, run: "minikube start -p download-only-521092"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.17s)
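The preload downloads above embed the expected digest in the URL's `checksum=md5:…` query parameter, which minikube checks after the tarball lands on disk. A standalone verification of such a tarball can be sketched as follows; `check_md5` and its arguments are illustrative names, not minikube code:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: compare a downloaded file against the md5 digest that
# minikube carries in the download URL's checksum= parameter. This function
# is illustrative only and is not part of the minikube source tree.
check_md5() {
  file=$1
  expected=$2
  actual=$(md5sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "OK: $file"
    return 0
  else
    echo "MISMATCH: $file (want $expected, got $actual)" >&2
    return 1
  fi
}
```

For the v1.31.0-beta.0 preload above, the expected digest would be the `70b5971c257ae4defe1f5d041a04e29c` value from the logged URL.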

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-521092
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.19s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-028250 --alsologtostderr --binary-mirror http://127.0.0.1:33677 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-028250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-028250
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-014077
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-014077: exit status 85 (66.441801ms)

                                                
                                                
-- stdout --
	* Profile "addons-014077" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-014077"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-014077
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-014077: exit status 85 (67.811949ms)

                                                
                                                
-- stdout --
	* Profile "addons-014077" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-014077"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (247.4s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-014077 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-014077 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (4m7.399498319s)
--- PASS: TestAddons/Setup (247.40s)
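The four-minute `TestAddons/Setup` run above enables thirteen addons in a single `minikube start` invocation. The flag list can be reassembled from a plain list of addon names; this is an illustrative sketch of how such a command line might be built, not code from the test harness:

```shell
#!/usr/bin/env sh
# Illustrative sketch: rebuild the `minikube start` command line used by
# TestAddons/Setup above from a list of addon names. The binary path and
# profile name are taken from the log; the script itself is hypothetical.
ADDONS="registry metrics-server volumesnapshots csi-hostpath-driver gcp-auth \
cloud-spanner inspektor-gadget storage-provisioner-rancher nvidia-device-plugin \
yakd volcano ingress ingress-dns"

CMD="out/minikube-linux-arm64 start -p addons-014077 --wait=true --memory=4000 --alsologtostderr"
for addon in $ADDONS; do
  CMD="$CMD --addons=$addon"
done
CMD="$CMD --driver=docker --container-runtime=crio"
echo "$CMD"
```

Running the script prints a command line equivalent (up to flag order) to the one logged by addons_test.go:110 above.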

                                                
                                    
TestAddons/parallel/Registry (16.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 44.670698ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-99psj" [267507bb-055e-4065-8138-ce3d5f7e0457] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011743146s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b99sl" [4ee1a72b-b280-4382-82d9-43f79c251273] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004760953s
addons_test.go:342: (dbg) Run:  kubectl --context addons-014077 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-014077 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-014077 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.960753363s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 ip
2024/07/19 04:34:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.01s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.94s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t76f9" [765fc5f6-2088-4aa2-b3b8-fe74d12b1648] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012059203s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-014077
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-014077: (5.922656828s)
--- PASS: TestAddons/parallel/InspektorGadget (10.94s)

                                                
                                    
TestAddons/parallel/CSI (45.17s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 8.014471ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-014077 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-014077 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8181a6d2-3aad-4099-8ae0-d61f32e33182] Pending
helpers_test.go:344: "task-pv-pod" [8181a6d2-3aad-4099-8ae0-d61f32e33182] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8181a6d2-3aad-4099-8ae0-d61f32e33182] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003453374s
addons_test.go:586: (dbg) Run:  kubectl --context addons-014077 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-014077 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-014077 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-014077 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-014077 delete pod task-pv-pod: (1.032922417s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-014077 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-014077 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-014077 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [53d861ea-f5ac-4413-8bc4-8e1667be7208] Pending
helpers_test.go:344: "task-pv-pod-restore" [53d861ea-f5ac-4413-8bc4-8e1667be7208] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [53d861ea-f5ac-4413-8bc4-8e1667be7208] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003841706s
addons_test.go:628: (dbg) Run:  kubectl --context addons-014077 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-014077 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-014077 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-014077 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.729653669s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-014077 addons disable volumesnapshots --alsologtostderr -v=1: (1.089464372s)
--- PASS: TestAddons/parallel/CSI (45.17s)

                                                
                                    
TestAddons/parallel/Headlamp (11.95s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-014077 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-m7m9q" [b59e1fa7-6164-4a1d-b4fe-f2d887fdbbb9] Pending
helpers_test.go:344: "headlamp-7867546754-m7m9q" [b59e1fa7-6164-4a1d-b4fe-f2d887fdbbb9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-m7m9q" [b59e1fa7-6164-4a1d-b4fe-f2d887fdbbb9] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003700737s
--- PASS: TestAddons/parallel/Headlamp (11.95s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-tgvzg" [eeb08ea2-63f8-47a3-9643-bf899e006818] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003322985s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-014077
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
TestAddons/parallel/LocalPath (51.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-014077 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-014077 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-014077 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bb37ce9f-ed11-4cde-a21d-f6bf27a9bcef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bb37ce9f-ed11-4cde-a21d-f6bf27a9bcef] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bb37ce9f-ed11-4cde-a21d-f6bf27a9bcef] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004842579s
addons_test.go:992: (dbg) Run:  kubectl --context addons-014077 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 ssh "cat /opt/local-path-provisioner/pvc-b5821a76-1b15-48b8-80bb-7ba2cf9bbdd9_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-014077 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-014077 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-014077 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-014077 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.217961397s)
--- PASS: TestAddons/parallel/LocalPath (51.34s)
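The back-to-back helpers_test.go:394 lines above are a poll loop: the PVC's `.status.phase` is re-read until it reports `Bound` or the stated budget (here 5m0s) runs out. A minimal, generic sketch of that pattern; the `wait_for` name, the canned phase sequence, and the injected `sleep` are illustrative, not minikube's actual helper:

```python
import time

def wait_for(predicate, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll predicate() until truthy or until `timeout` seconds elapse.

    Returns True on success, False on timeout -- the same shape as the
    repeated `kubectl get pvc ... -o jsonpath={.status.phase}` checks
    in the log above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        sleep(interval)
    return False

# A real probe would shell out to kubectl; here a canned phase
# sequence stands in for it.
phases = iter(["Pending", "Pending", "Bound"])
print(wait_for(lambda: next(phases) == "Bound", timeout=10, sleep=lambda _: None))
```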

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ms7rm" [e10fa14c-5d6e-4792-ba1d-e37851cd7388] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004158013s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-014077
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-m6mc8" [a43cc0cf-87c1-4aa5-9b63-db7bda005805] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004762038s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-014077 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-014077 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-014077
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-014077: (11.945301716s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-014077
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-014077
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-014077
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

                                                
                                    
TestCertOptions (37.19s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-006139 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-006139 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.583007473s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-006139 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-006139 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-006139 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-006139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-006139
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-006139: (1.97553239s)
--- PASS: TestCertOptions (37.19s)

                                                
                                    
TestCertExpiration (256.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-184022 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-184022 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.621963576s)
E0719 05:17:31.265873  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-184022 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-184022 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (38.322306128s)
helpers_test.go:175: Cleaning up "cert-expiration-184022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-184022
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-184022: (2.495112415s)
--- PASS: TestCertExpiration (256.44s)

                                                
                                    
TestForceSystemdFlag (41.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-687049 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-687049 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.095400437s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-687049 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-687049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-687049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-687049: (2.808746163s)
--- PASS: TestForceSystemdFlag (41.35s)

                                                
                                    
TestForceSystemdEnv (36.55s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-701213 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-701213 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.02609015s)
helpers_test.go:175: Cleaning up "force-systemd-env-701213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-701213
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-701213: (2.522636443s)
--- PASS: TestForceSystemdEnv (36.55s)

                                                
                                    
TestErrorSpam/setup (30s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-220793 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-220793 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-220793 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-220793 --driver=docker  --container-runtime=crio: (30.000723104s)
--- PASS: TestErrorSpam/setup (30.00s)

                                                
                                    
TestErrorSpam/start (0.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

                                                
                                    
TestErrorSpam/status (0.97s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 status
--- PASS: TestErrorSpam/status (0.97s)

                                                
                                    
TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
TestErrorSpam/unpause (1.71s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

                                                
                                    
TestErrorSpam/stop (1.37s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 stop: (1.195519383s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-220793 --log_dir /tmp/nospam-220793 stop
--- PASS: TestErrorSpam/stop (1.37s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19302-437615/.minikube/files/etc/test/nested/copy/443154/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.58s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-402532 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-402532 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (57.575644176s)
--- PASS: TestFunctional/serial/StartWithProxy (57.58s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.61s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-402532 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-402532 --alsologtostderr -v=8: (54.610577086s)
functional_test.go:659: soft start took 54.612626413s for "functional-402532" cluster.
--- PASS: TestFunctional/serial/SoftStart (54.61s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-402532 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 cache add registry.k8s.io/pause:3.1: (1.367439668s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 cache add registry.k8s.io/pause:3.3: (1.410382626s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 cache add registry.k8s.io/pause:latest: (1.347556752s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-402532 /tmp/TestFunctionalserialCacheCmdcacheadd_local1803155744/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cache add minikube-local-cache-test:functional-402532
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cache delete minikube-local-cache-test:functional-402532
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-402532
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (280.780313ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 cache reload: (1.172892203s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 kubectl -- --context functional-402532 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.20s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-402532 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.77s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-402532 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-402532 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.77039012s)
functional_test.go:757: restart took 31.770490154s for "functional-402532" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.77s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-402532 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 logs: (1.679392813s)
--- PASS: TestFunctional/serial/LogsCmd (1.68s)

TestFunctional/serial/LogsFileCmd (1.69s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 logs --file /tmp/TestFunctionalserialLogsFileCmd2139696304/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 logs --file /tmp/TestFunctionalserialLogsFileCmd2139696304/001/logs.txt: (1.6895292s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

TestFunctional/serial/InvalidService (4.71s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-402532 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-402532
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-402532: exit status 115 (387.622516ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30251 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-402532 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-402532 delete -f testdata/invalidsvc.yaml: (1.09490452s)
--- PASS: TestFunctional/serial/InvalidService (4.71s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 config get cpus: exit status 14 (82.387476ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 config get cpus: exit status 14 (63.645369ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (7.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-402532 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-402532 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 471692: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.62s)

TestFunctional/parallel/DryRun (0.64s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-402532 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-402532 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (276.031041ms)

-- stdout --
	* [functional-402532] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0719 04:45:04.049100  471580 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:45:04.049361  471580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:45:04.049387  471580 out.go:304] Setting ErrFile to fd 2...
	I0719 04:45:04.049407  471580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:45:04.049712  471580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 04:45:04.051654  471580 out.go:298] Setting JSON to false
	I0719 04:45:04.052715  471580 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8849,"bootTime":1721355455,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0719 04:45:04.052822  471580 start.go:139] virtualization:  
	I0719 04:45:04.056699  471580 out.go:177] * [functional-402532] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0719 04:45:04.059012  471580 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:45:04.059075  471580 notify.go:220] Checking for updates...
	I0719 04:45:04.063593  471580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:45:04.066466  471580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:45:04.068721  471580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	I0719 04:45:04.070908  471580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0719 04:45:04.073103  471580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:45:04.075633  471580 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:45:04.076217  471580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:45:04.102305  471580 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 04:45:04.102423  471580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:45:04.226878  471580 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-19 04:45:04.214674992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:45:04.226998  471580 docker.go:307] overlay module found
	I0719 04:45:04.229503  471580 out.go:177] * Using the docker driver based on existing profile
	I0719 04:45:04.231544  471580 start.go:297] selected driver: docker
	I0719 04:45:04.231564  471580 start.go:901] validating driver "docker" against &{Name:functional-402532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-402532 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:45:04.231690  471580 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:45:04.234543  471580 out.go:177] 
	W0719 04:45:04.236878  471580 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 04:45:04.238898  471580 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-402532 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.64s)

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-402532 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-402532 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.433993ms)

-- stdout --
	* [functional-402532] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0719 04:45:11.459884  472385 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:45:11.460125  472385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:45:11.460152  472385 out.go:304] Setting ErrFile to fd 2...
	I0719 04:45:11.460171  472385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:45:11.460623  472385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 04:45:11.461065  472385 out.go:298] Setting JSON to false
	I0719 04:45:11.462154  472385 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":8857,"bootTime":1721355455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0719 04:45:11.462262  472385 start.go:139] virtualization:  
	I0719 04:45:11.465482  472385 out.go:177] * [functional-402532] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0719 04:45:11.468248  472385 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:45:11.468300  472385 notify.go:220] Checking for updates...
	I0719 04:45:11.470309  472385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:45:11.472022  472385 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 04:45:11.473657  472385 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	I0719 04:45:11.475607  472385 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0719 04:45:11.477441  472385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:45:11.479480  472385 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:45:11.480070  472385 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:45:11.502181  472385 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 04:45:11.502313  472385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:45:11.578771  472385 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-19 04:45:11.567437518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:45:11.578882  472385 docker.go:307] overlay module found
	I0719 04:45:11.581354  472385 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0719 04:45:11.582940  472385 start.go:297] selected driver: docker
	I0719 04:45:11.582961  472385 start.go:901] validating driver "docker" against &{Name:functional-402532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-402532 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:45:11.583111  472385 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:45:11.585684  472385 out.go:177] 
	W0719 04:45:11.587824  472385 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 04:45:11.590034  472385 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

TestFunctional/parallel/ServiceCmdConnect (13.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-402532 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-402532 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-44zqh" [28646887-035a-471c-a933-d6dc1440b8dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-44zqh" [28646887-035a-471c-a933-d6dc1440b8dc] Running
E0719 04:44:48.701825  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.004696274s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31364
functional_test.go:1671: http://192.168.49.2:31364: success! body:

Hostname: hello-node-connect-6f49f58cd5-44zqh

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31364
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.67s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)
TestFunctional/parallel/PersistentVolumeClaim (26.31s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [70a42dff-5a28-4fe3-8d4c-98c59479a39a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004717364s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-402532 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-402532 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-402532 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-402532 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ee5c1944-3df6-4cb6-ba1f-641586a204b1] Pending
helpers_test.go:344: "sp-pod" [ee5c1944-3df6-4cb6-ba1f-641586a204b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ee5c1944-3df6-4cb6-ba1f-641586a204b1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003497281s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-402532 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-402532 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-402532 delete -f testdata/storage-provisioner/pod.yaml: (1.256004759s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-402532 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [38696ce5-80aa-416e-b4b6-45ca185bf49e] Pending
helpers_test.go:344: "sp-pod" [38696ce5-80aa-416e-b4b6-45ca185bf49e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005022621s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-402532 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.31s)
TestFunctional/parallel/SSHCmd (0.65s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)
TestFunctional/parallel/CpCmd (1.82s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh -n functional-402532 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cp functional-402532:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4033643734/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh -n functional-402532 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh -n functional-402532 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.82s)
TestFunctional/parallel/FileSync (0.38s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/443154/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo cat /etc/test/nested/copy/443154/hosts"
E0719 04:44:29.498413  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)
TestFunctional/parallel/CertSync (1.95s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/443154.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo cat /etc/ssl/certs/443154.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/443154.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo cat /usr/share/ca-certificates/443154.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4431542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo cat /etc/ssl/certs/4431542.pem"
E0719 04:44:28.218019  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:44:28.224534  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:44:28.234625  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:44:28.255154  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:44:28.295402  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:44:28.375874  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4431542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo cat /usr/share/ca-certificates/4431542.pem"
E0719 04:44:28.536759  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0719 04:44:28.857912  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CertSync (1.95s)
TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-402532 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 ssh "sudo systemctl is-active docker": exit status 1 (342.497574ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 ssh "sudo systemctl is-active containerd": exit status 1 (328.540676ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
TestFunctional/parallel/License (0.31s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)
TestFunctional/parallel/Version/components (1.24s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 version -o=json --components: (1.238922538s)
--- PASS: TestFunctional/parallel/Version/components (1.24s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-402532 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-402532
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-402532 image ls --format short --alsologtostderr:
I0719 04:45:13.386404  472719 out.go:291] Setting OutFile to fd 1 ...
I0719 04:45:13.386574  472719 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:13.386589  472719 out.go:304] Setting ErrFile to fd 2...
I0719 04:45:13.386594  472719 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:13.386832  472719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
I0719 04:45:13.387414  472719 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:13.387527  472719 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:13.387989  472719 cli_runner.go:164] Run: docker container inspect functional-402532 --format={{.State.Status}}
I0719 04:45:13.404826  472719 ssh_runner.go:195] Run: systemctl --version
I0719 04:45:13.404894  472719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-402532
I0719 04:45:13.421843  472719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/functional-402532/id_rsa Username:docker}
I0719 04:45:13.511057  472719 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-402532 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 61773190d42ff | 114MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5e32961ddcea3 | 90.3MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | d48f992a22722 | 61.6MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 8e97cdb19e7cc | 108MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 2351f570ed0ea | 89.2MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | alpine             | 5461b18aaccf3 | 46.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| docker.io/kicbase/echo-server           | functional-402532  | ce2d2cda2d858 | 4.79MB |
| docker.io/library/nginx                 | latest             | 443d199e8bfcc | 197MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-402532 image ls --format table --alsologtostderr:
I0719 04:45:14.065956  472812 out.go:291] Setting OutFile to fd 1 ...
I0719 04:45:14.066114  472812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:14.066142  472812 out.go:304] Setting ErrFile to fd 2...
I0719 04:45:14.066149  472812 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:14.066586  472812 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
I0719 04:45:14.067283  472812 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:14.067518  472812 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:14.068068  472812 cli_runner.go:164] Run: docker container inspect functional-402532 --format={{.State.Status}}
I0719 04:45:14.085774  472812 ssh_runner.go:195] Run: systemctl --version
I0719 04:45:14.085840  472812 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-402532
I0719 04:45:14.109594  472812 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/functional-402532/id_rsa Username:docker}
I0719 04:45:14.194960  472812 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-402532 image ls --format json --alsologtostderr:
[{"id":"5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493","docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"90278450"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4","registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"61568326"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:functional-402532"],"size":"4788229"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":["docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55","docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671377"},{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e"],"repoTags":["docker.io/library/nginx:latest"],"size":"197104786"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"113538528"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"108229958"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"89199511"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-402532 image ls --format json --alsologtostderr:
I0719 04:45:13.841394  472781 out.go:291] Setting OutFile to fd 1 ...
I0719 04:45:13.841700  472781 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:13.841760  472781 out.go:304] Setting ErrFile to fd 2...
I0719 04:45:13.841793  472781 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:13.842179  472781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
I0719 04:45:13.843352  472781 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:13.843716  472781 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:13.844942  472781 cli_runner.go:164] Run: docker container inspect functional-402532 --format={{.State.Status}}
I0719 04:45:13.862180  472781 ssh_runner.go:195] Run: systemctl --version
I0719 04:45:13.862243  472781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-402532
I0719 04:45:13.879571  472781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/functional-402532/id_rsa Username:docker}
I0719 04:45:13.967866  472781 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-402532 image ls --format yaml --alsologtostderr:
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:078d7873222b53b4530e619e8dc5bccf8420557f3be2b4996a65e59ba4a09499
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "108229958"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
- registry.k8s.io/kube-scheduler@sha256:f194dea192a672732bc45ef2e7a0bcf28080ae6bd0626bd2c444edda987d7b95
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "61568326"
- id: 5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
- docker.io/kindest/kindnetd@sha256:ca8545687e833593ef3047fdbb04957ab9a32153bc36738760b6975879ada987
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "90278450"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:30d6b23df5ccf427536840a904047f3cd946c9c78bf9750f0d82b18409d6089e
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "113538528"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests:
- docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55
- docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62
repoTags:
- docker.io/library/nginx:alpine
size: "46671377"
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:9a3f8e8b2777851f98c569c91f8ebd6f21b0af188c245c38a0179086bb27782e
repoTags:
- docker.io/library/nginx:latest
size: "197104786"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22d1f9b0734b7dbb2266b889edf456303746e750129e4d7f20699f23e9a31acc
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "89199511"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:functional-402532
size: "4788229"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-402532 image ls --format yaml --alsologtostderr:
I0719 04:45:13.611765  472750 out.go:291] Setting OutFile to fd 1 ...
I0719 04:45:13.611950  472750 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:13.611983  472750 out.go:304] Setting ErrFile to fd 2...
I0719 04:45:13.612003  472750 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:13.612271  472750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
I0719 04:45:13.612992  472750 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:13.613172  472750 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:13.613727  472750 cli_runner.go:164] Run: docker container inspect functional-402532 --format={{.State.Status}}
I0719 04:45:13.630936  472750 ssh_runner.go:195] Run: systemctl --version
I0719 04:45:13.630988  472750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-402532
I0719 04:45:13.648987  472750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/functional-402532/id_rsa Username:docker}
I0719 04:45:13.742947  472750 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
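The `size` fields in the YAML listing above are quoted decimal byte counts, so a quick offline sanity total can be computed with a one-line awk filter. A minimal sketch, using two entries copied from the listing (the temp-file path is arbitrary):

```shell
# Total the reported image sizes from an `image ls --format yaml` listing.
# The two entries below are copied verbatim from the output above; the awk
# filter assumes sizes are quoted byte counts on `size:` lines.
cat > /tmp/imagelist.yaml <<'EOF'
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
  repoTags:
  - registry.k8s.io/kube-proxy:v1.30.3
  size: "89199511"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
  repoTags:
  - registry.k8s.io/pause:3.3
  size: "487479"
EOF
awk -F'"' '/size:/ {total += $2} END {print total}' /tmp/imagelist.yaml
# → 89686990
```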

TestFunctional/parallel/ImageCommands/ImageBuild (2.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 ssh pgrep buildkitd: exit status 1 (252.380119ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image build -t localhost/my-image:functional-402532 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 image build -t localhost/my-image:functional-402532 testdata/build --alsologtostderr: (2.326965142s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-402532 image build -t localhost/my-image:functional-402532 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3bc6c962f32
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-402532
--> 76105da5c32
Successfully tagged localhost/my-image:functional-402532
76105da5c324cdd25ae6868c6d67330622262f7a441e40460bf45eeaddc6d867
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-402532 image build -t localhost/my-image:functional-402532 testdata/build --alsologtostderr:
I0719 04:45:14.542245  472899 out.go:291] Setting OutFile to fd 1 ...
I0719 04:45:14.542897  472899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:14.542934  472899 out.go:304] Setting ErrFile to fd 2...
I0719 04:45:14.542957  472899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:45:14.543237  472899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
I0719 04:45:14.543990  472899 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:14.545588  472899 config.go:182] Loaded profile config "functional-402532": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:45:14.546183  472899 cli_runner.go:164] Run: docker container inspect functional-402532 --format={{.State.Status}}
I0719 04:45:14.565221  472899 ssh_runner.go:195] Run: systemctl --version
I0719 04:45:14.565285  472899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-402532
I0719 04:45:14.581776  472899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/functional-402532/id_rsa Username:docker}
I0719 04:45:14.670857  472899 build_images.go:161] Building image from path: /tmp/build.2171098088.tar
I0719 04:45:14.670928  472899 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 04:45:14.679837  472899 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2171098088.tar
I0719 04:45:14.683398  472899 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2171098088.tar: stat -c "%s %y" /var/lib/minikube/build/build.2171098088.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2171098088.tar': No such file or directory
I0719 04:45:14.683428  472899 ssh_runner.go:362] scp /tmp/build.2171098088.tar --> /var/lib/minikube/build/build.2171098088.tar (3072 bytes)
I0719 04:45:14.708564  472899 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2171098088
I0719 04:45:14.722186  472899 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2171098088 -xf /var/lib/minikube/build/build.2171098088.tar
I0719 04:45:14.736286  472899 crio.go:315] Building image: /var/lib/minikube/build/build.2171098088
I0719 04:45:14.736363  472899 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-402532 /var/lib/minikube/build/build.2171098088 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0719 04:45:16.798834  472899 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-402532 /var/lib/minikube/build/build.2171098088 --cgroup-manager=cgroupfs: (2.062439647s)
I0719 04:45:16.798909  472899 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2171098088
I0719 04:45:16.808023  472899 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2171098088.tar
I0719 04:45:16.817597  472899 build_images.go:217] Built localhost/my-image:functional-402532 from /tmp/build.2171098088.tar
I0719 04:45:16.817659  472899 build_images.go:133] succeeded building to: functional-402532
I0719 04:45:16.817676  472899 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.84s)
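The build transcript above (STEP 1/3 through 3/3) implies a three-step build context. A hypothetical reconstruction, assuming only what the log shows — the payload of `content.txt` is invented, and the real `testdata/build` context ships with the minikube repo:

```shell
# Recreate a build context matching the STEP 1/3..3/3 transcript above.
# Only the Dockerfile steps are taken from the log; the content.txt
# payload is an assumption.
mkdir -p build
printf 'hello\n' > build/content.txt
cat > build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# Then, as in the test:
#   minikube image build -t localhost/my-image:functional-402532 ./build
```

On the crio runtime, minikube delegates this build to `sudo podman build ... --cgroup-manager=cgroupfs` inside the node, as the Run line above shows.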

TestFunctional/parallel/ImageCommands/Setup (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-402532
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.76s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image load --daemon docker.io/kicbase/echo-server:functional-402532 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-402532 image load --daemon docker.io/kicbase/echo-server:functional-402532 --alsologtostderr: (1.265187398s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image load --daemon docker.io/kicbase/echo-server:functional-402532 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-402532
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image load --daemon docker.io/kicbase/echo-server:functional-402532 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-402532 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-402532 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-402532 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-402532 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 469507: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-402532 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.38s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-402532 apply -f testdata/testsvc.yaml
E0719 04:44:30.779661  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [25221593-8ea8-4f98-b5c9-2eee6fea95ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [25221593-8ea8-4f98-b5c9-2eee6fea95ce] Running
E0719 04:44:38.461434  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004006035s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image save docker.io/kicbase/echo-server:functional-402532 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image rm docker.io/kicbase/echo-server:functional-402532 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.02s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
E0719 04:44:33.340424  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.02s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-402532
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 image save --daemon docker.io/kicbase/echo-server:functional-402532 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-402532
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-402532 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)
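The JSONPath query above reads `.status.loadBalancer.ingress[0].ip` from the `nginx-svc` service. The same field can be pulled from a saved manifest without a cluster; a sketch against a hypothetical, trimmed status stanza (the IP value mirrors the tunnel endpoint reported by AccessDirect):

```shell
# Extract the tunnel-assigned LoadBalancer IP from a service manifest.
# The JSON below is an assumed, minimal status stanza, not real kubectl
# output; only the field path matches the JSONPath query in the log.
cat > nginx-svc-status.json <<'EOF'
{"status":{"loadBalancer":{"ingress":[{"ip":"10.97.253.179"}]}}}
EOF
sed -n 's/.*"ip":"\([0-9.]*\)".*/\1/p' nginx-svc-status.json
# → 10.97.253.179
```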

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.253.179 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-402532 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-402532 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-402532 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-jn7lk" [4253f5fd-de67-4380-9fd7-20db511cb7d8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-jn7lk" [4253f5fd-de67-4380-9fd7-20db511cb7d8] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005744533s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ServiceCmd/List (0.6s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "428.489877ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "68.231544ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 service list -o json
functional_test.go:1490: Took "614.59539ms" to run "out/minikube-linux-arm64 -p functional-402532 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "434.574713ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "55.275439ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30361
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30361
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdany-port3938413187/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721364304606953122" to /tmp/TestFunctionalparallelMountCmdany-port3938413187/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721364304606953122" to /tmp/TestFunctionalparallelMountCmdany-port3938413187/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721364304606953122" to /tmp/TestFunctionalparallelMountCmdany-port3938413187/001/test-1721364304606953122
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (462.281125ms)
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 04:45 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 04:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 04:45 test-1721364304606953122
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh cat /mount-9p/test-1721364304606953122
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-402532 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [330d936b-bca5-4635-8191-fefd4eeb7b20] Pending
helpers_test.go:344: "busybox-mount" [330d936b-bca5-4635-8191-fefd4eeb7b20] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0719 04:45:09.182013  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
2024/07/19 04:45:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "busybox-mount" [330d936b-bca5-4635-8191-fefd4eeb7b20] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [330d936b-bca5-4635-8191-fefd4eeb7b20] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004297326s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-402532 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdany-port3938413187/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.10s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdspecific-port9952707/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (459.665128ms)
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdspecific-port9952707/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 ssh "sudo umount -f /mount-9p": exit status 1 (258.952992ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-402532 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdspecific-port9952707/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.10s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2957826899/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2957826899/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2957826899/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T" /mount1: exit status 1 (549.380006ms)
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-402532 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-402532 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2957826899/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2957826899/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-402532 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2957826899/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-402532
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-402532
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-402532
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (180.64s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-534816 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0719 04:45:50.143447  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:47:12.064168  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-534816 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m59.796121246s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (180.64s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-534816 -- rollout status deployment/busybox: (4.655742341s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-4mxwg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-wjsb7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-xlkf5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-4mxwg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-wjsb7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-xlkf5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-4mxwg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-wjsb7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-xlkf5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.45s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-4mxwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-4mxwg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-wjsb7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-wjsb7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-xlkf5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534816 -- exec busybox-fc5497c4f-xlkf5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (35.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-534816 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-534816 -v=7 --alsologtostderr: (34.823951281s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.79s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-534816 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp testdata/cp-test.txt ha-534816:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2475781564/001/cp-test_ha-534816.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816:/home/docker/cp-test.txt ha-534816-m02:/home/docker/cp-test_ha-534816_ha-534816-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m02 "sudo cat /home/docker/cp-test_ha-534816_ha-534816-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816:/home/docker/cp-test.txt ha-534816-m03:/home/docker/cp-test_ha-534816_ha-534816-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m03 "sudo cat /home/docker/cp-test_ha-534816_ha-534816-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816:/home/docker/cp-test.txt ha-534816-m04:/home/docker/cp-test_ha-534816_ha-534816-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m04 "sudo cat /home/docker/cp-test_ha-534816_ha-534816-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp testdata/cp-test.txt ha-534816-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2475781564/001/cp-test_ha-534816-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m02:/home/docker/cp-test.txt ha-534816:/home/docker/cp-test_ha-534816-m02_ha-534816.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816 "sudo cat /home/docker/cp-test_ha-534816-m02_ha-534816.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m02:/home/docker/cp-test.txt ha-534816-m03:/home/docker/cp-test_ha-534816-m02_ha-534816-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m03 "sudo cat /home/docker/cp-test_ha-534816-m02_ha-534816-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m02:/home/docker/cp-test.txt ha-534816-m04:/home/docker/cp-test_ha-534816-m02_ha-534816-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m04 "sudo cat /home/docker/cp-test_ha-534816-m02_ha-534816-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp testdata/cp-test.txt ha-534816-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2475781564/001/cp-test_ha-534816-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m03:/home/docker/cp-test.txt ha-534816:/home/docker/cp-test_ha-534816-m03_ha-534816.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816 "sudo cat /home/docker/cp-test_ha-534816-m03_ha-534816.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m03:/home/docker/cp-test.txt ha-534816-m02:/home/docker/cp-test_ha-534816-m03_ha-534816-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m02 "sudo cat /home/docker/cp-test_ha-534816-m03_ha-534816-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m03:/home/docker/cp-test.txt ha-534816-m04:/home/docker/cp-test_ha-534816-m03_ha-534816-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m04 "sudo cat /home/docker/cp-test_ha-534816-m03_ha-534816-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp testdata/cp-test.txt ha-534816-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2475781564/001/cp-test_ha-534816-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m04:/home/docker/cp-test.txt ha-534816:/home/docker/cp-test_ha-534816-m04_ha-534816.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816 "sudo cat /home/docker/cp-test_ha-534816-m04_ha-534816.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m04:/home/docker/cp-test.txt ha-534816-m02:/home/docker/cp-test_ha-534816-m04_ha-534816-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m02 "sudo cat /home/docker/cp-test_ha-534816-m04_ha-534816-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 cp ha-534816-m04:/home/docker/cp-test.txt ha-534816-m03:/home/docker/cp-test_ha-534816-m04_ha-534816-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 ssh -n ha-534816-m03 "sudo cat /home/docker/cp-test_ha-534816-m04_ha-534816-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.06s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 node stop m02 -v=7 --alsologtostderr
E0719 04:49:28.218163  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:49:30.695083  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:30.700355  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:30.710615  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:30.730835  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:30.771099  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:30.851474  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:31.011956  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:31.332297  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:31.973210  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:33.253400  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:35.814575  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-534816 node stop m02 -v=7 --alsologtostderr: (11.993462026s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr: exit status 7 (719.209872ms)
-- stdout --
	ha-534816
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-534816-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-534816-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-534816-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:49:39.154869  489732 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:49:39.154993  489732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:49:39.155004  489732 out.go:304] Setting ErrFile to fd 2...
	I0719 04:49:39.155009  489732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:49:39.155250  489732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 04:49:39.155427  489732 out.go:298] Setting JSON to false
	I0719 04:49:39.155458  489732 mustload.go:65] Loading cluster: ha-534816
	I0719 04:49:39.155553  489732 notify.go:220] Checking for updates...
	I0719 04:49:39.155862  489732 config.go:182] Loaded profile config "ha-534816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:49:39.155875  489732 status.go:255] checking status of ha-534816 ...
	I0719 04:49:39.156350  489732 cli_runner.go:164] Run: docker container inspect ha-534816 --format={{.State.Status}}
	I0719 04:49:39.182211  489732 status.go:330] ha-534816 host status = "Running" (err=<nil>)
	I0719 04:49:39.182271  489732 host.go:66] Checking if "ha-534816" exists ...
	I0719 04:49:39.182633  489732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-534816
	I0719 04:49:39.213966  489732 host.go:66] Checking if "ha-534816" exists ...
	I0719 04:49:39.214246  489732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:49:39.214295  489732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-534816
	I0719 04:49:39.235955  489732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/ha-534816/id_rsa Username:docker}
	I0719 04:49:39.324231  489732 ssh_runner.go:195] Run: systemctl --version
	I0719 04:49:39.328785  489732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:49:39.340936  489732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 04:49:39.409299  489732 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-19 04:49:39.399582324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 04:49:39.409886  489732 kubeconfig.go:125] found "ha-534816" server: "https://192.168.49.254:8443"
	I0719 04:49:39.409922  489732 api_server.go:166] Checking apiserver status ...
	I0719 04:49:39.409964  489732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:49:39.421188  489732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1392/cgroup
	I0719 04:49:39.430902  489732 api_server.go:182] apiserver freezer: "11:freezer:/docker/1b6943b13ee093872b2f55459e537eeb1a10355c7bbcc4a99ae1b949a1c93cd5/crio/crio-5dd61df7cee220a540d45dc891c7aa9b62c35d3e32edaf641fc194f34da8089a"
	I0719 04:49:39.430989  489732 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1b6943b13ee093872b2f55459e537eeb1a10355c7bbcc4a99ae1b949a1c93cd5/crio/crio-5dd61df7cee220a540d45dc891c7aa9b62c35d3e32edaf641fc194f34da8089a/freezer.state
	I0719 04:49:39.439645  489732 api_server.go:204] freezer state: "THAWED"
	I0719 04:49:39.439670  489732 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0719 04:49:39.447309  489732 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0719 04:49:39.447337  489732 status.go:422] ha-534816 apiserver status = Running (err=<nil>)
	I0719 04:49:39.447348  489732 status.go:257] ha-534816 status: &{Name:ha-534816 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:49:39.447372  489732 status.go:255] checking status of ha-534816-m02 ...
	I0719 04:49:39.447668  489732 cli_runner.go:164] Run: docker container inspect ha-534816-m02 --format={{.State.Status}}
	I0719 04:49:39.464364  489732 status.go:330] ha-534816-m02 host status = "Stopped" (err=<nil>)
	I0719 04:49:39.464385  489732 status.go:343] host is not running, skipping remaining checks
	I0719 04:49:39.464393  489732 status.go:257] ha-534816-m02 status: &{Name:ha-534816-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:49:39.464413  489732 status.go:255] checking status of ha-534816-m03 ...
	I0719 04:49:39.464713  489732 cli_runner.go:164] Run: docker container inspect ha-534816-m03 --format={{.State.Status}}
	I0719 04:49:39.481147  489732 status.go:330] ha-534816-m03 host status = "Running" (err=<nil>)
	I0719 04:49:39.481169  489732 host.go:66] Checking if "ha-534816-m03" exists ...
	I0719 04:49:39.481459  489732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-534816-m03
	I0719 04:49:39.497606  489732 host.go:66] Checking if "ha-534816-m03" exists ...
	I0719 04:49:39.497916  489732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:49:39.497961  489732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-534816-m03
	I0719 04:49:39.515014  489732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/ha-534816-m03/id_rsa Username:docker}
	I0719 04:49:39.603508  489732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:49:39.615666  489732 kubeconfig.go:125] found "ha-534816" server: "https://192.168.49.254:8443"
	I0719 04:49:39.615697  489732 api_server.go:166] Checking apiserver status ...
	I0719 04:49:39.615744  489732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:49:39.626411  489732 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1389/cgroup
	I0719 04:49:39.635445  489732 api_server.go:182] apiserver freezer: "11:freezer:/docker/354128da38af0f2822f998eb2be6a110bca730fdb891734d14a2093f4856f267/crio/crio-e3b73f67b9b96e43a9f942df0ff32feba99c9432eda3e2e4395e019ff6197c51"
	I0719 04:49:39.635517  489732 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/354128da38af0f2822f998eb2be6a110bca730fdb891734d14a2093f4856f267/crio/crio-e3b73f67b9b96e43a9f942df0ff32feba99c9432eda3e2e4395e019ff6197c51/freezer.state
	I0719 04:49:39.644711  489732 api_server.go:204] freezer state: "THAWED"
	I0719 04:49:39.644740  489732 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0719 04:49:39.652376  489732 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0719 04:49:39.652409  489732 status.go:422] ha-534816-m03 apiserver status = Running (err=<nil>)
	I0719 04:49:39.652420  489732 status.go:257] ha-534816-m03 status: &{Name:ha-534816-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:49:39.652463  489732 status.go:255] checking status of ha-534816-m04 ...
	I0719 04:49:39.652783  489732 cli_runner.go:164] Run: docker container inspect ha-534816-m04 --format={{.State.Status}}
	I0719 04:49:39.679890  489732 status.go:330] ha-534816-m04 host status = "Running" (err=<nil>)
	I0719 04:49:39.679919  489732 host.go:66] Checking if "ha-534816-m04" exists ...
	I0719 04:49:39.680236  489732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-534816-m04
	I0719 04:49:39.697302  489732 host.go:66] Checking if "ha-534816-m04" exists ...
	I0719 04:49:39.697641  489732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:49:39.697696  489732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-534816-m04
	I0719 04:49:39.716565  489732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/ha-534816-m04/id_rsa Username:docker}
	I0719 04:49:39.804172  489732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:49:39.818827  489732 status.go:257] ha-534816-m04 status: &{Name:ha-534816-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)
TestMultiControlPlane/serial/RestartSecondaryNode (21.34s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 node start m02 -v=7 --alsologtostderr
E0719 04:49:40.935491  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:51.176472  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:49:55.904474  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-534816 node start m02 -v=7 --alsologtostderr: (19.996169575s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr: (1.201817963s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.34s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (13.83s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0719 04:50:11.656692  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (13.833438353s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (13.83s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (201.81s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-534816 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-534816 -v=7 --alsologtostderr
E0719 04:50:52.617767  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-534816 -v=7 --alsologtostderr: (36.985716538s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-534816 --wait=true -v=7 --alsologtostderr
E0719 04:52:14.538424  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-534816 --wait=true -v=7 --alsologtostderr: (2m44.658432446s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-534816
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (201.81s)
TestMultiControlPlane/serial/DeleteSecondaryNode (13.38s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-534816 node delete m03 -v=7 --alsologtostderr: (12.47445011s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.38s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)
TestMultiControlPlane/serial/StopCluster (35.69s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-534816 stop -v=7 --alsologtostderr: (35.578774101s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr: exit status 7 (115.406545ms)
-- stdout --
	ha-534816
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-534816-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-534816-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0719 04:54:26.909369  504352 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:54:26.909546  504352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:54:26.909574  504352 out.go:304] Setting ErrFile to fd 2...
	I0719 04:54:26.909595  504352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:54:26.909873  504352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 04:54:26.910102  504352 out.go:298] Setting JSON to false
	I0719 04:54:26.910172  504352 mustload.go:65] Loading cluster: ha-534816
	I0719 04:54:26.910244  504352 notify.go:220] Checking for updates...
	I0719 04:54:26.910676  504352 config.go:182] Loaded profile config "ha-534816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:54:26.910709  504352 status.go:255] checking status of ha-534816 ...
	I0719 04:54:26.911488  504352 cli_runner.go:164] Run: docker container inspect ha-534816 --format={{.State.Status}}
	I0719 04:54:26.929136  504352 status.go:330] ha-534816 host status = "Stopped" (err=<nil>)
	I0719 04:54:26.929156  504352 status.go:343] host is not running, skipping remaining checks
	I0719 04:54:26.929163  504352 status.go:257] ha-534816 status: &{Name:ha-534816 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:54:26.929186  504352 status.go:255] checking status of ha-534816-m02 ...
	I0719 04:54:26.929482  504352 cli_runner.go:164] Run: docker container inspect ha-534816-m02 --format={{.State.Status}}
	I0719 04:54:26.953259  504352 status.go:330] ha-534816-m02 host status = "Stopped" (err=<nil>)
	I0719 04:54:26.953279  504352 status.go:343] host is not running, skipping remaining checks
	I0719 04:54:26.953291  504352 status.go:257] ha-534816-m02 status: &{Name:ha-534816-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:54:26.953312  504352 status.go:255] checking status of ha-534816-m04 ...
	I0719 04:54:26.953616  504352 cli_runner.go:164] Run: docker container inspect ha-534816-m04 --format={{.State.Status}}
	I0719 04:54:26.979265  504352 status.go:330] ha-534816-m04 host status = "Stopped" (err=<nil>)
	I0719 04:54:26.979290  504352 status.go:343] host is not running, skipping remaining checks
	I0719 04:54:26.979298  504352 status.go:257] ha-534816-m04 status: &{Name:ha-534816-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.69s)
TestMultiControlPlane/serial/RestartCluster (118.82s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-534816 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0719 04:54:28.217515  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:54:30.695285  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 04:54:58.378646  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-534816 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m57.899240909s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (118.82s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)
TestMultiControlPlane/serial/AddSecondaryNode (71.35s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-534816 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-534816 --control-plane -v=7 --alsologtostderr: (1m10.344381745s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-534816 status -v=7 --alsologtostderr: (1.010248097s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.35s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)
TestJSONOutput/start/Command (85.64s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-981314 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-981314 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m25.634127022s)
--- PASS: TestJSONOutput/start/Command (85.64s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.74s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-981314 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.69s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-981314 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (5.89s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-981314 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-981314 --output=json --user=testUser: (5.894737551s)
--- PASS: TestJSONOutput/stop/Command (5.89s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-925188 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-925188 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.06848ms)
-- stdout --
	{"specversion":"1.0","id":"879cbc43-d34f-4d93-83b8-2b561934bc13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-925188] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"253c1e15-ad43-47a6-8a3e-3dbdfdb9ee88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"29c638df-7856-4d68-a22b-6c87512722f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d8580f54-56b1-4ceb-bcb4-16288aae24c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig"}}
	{"specversion":"1.0","id":"e97e430d-4efd-41c8-bc00-ecbaf23a2ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube"}}
	{"specversion":"1.0","id":"9a0280ae-5d11-46f7-b2b3-ef6c817aabb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"616dc839-4cd6-4973-9294-d0b697c615c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"32e3cf68-5a23-4b4a-b14e-8c2ff7427bf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-925188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-925188
--- PASS: TestErrorJSONOutput (0.20s)
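The `--output=json` stream above is a sequence of CloudEvents-style JSON lines. A minimal Python sketch of consuming such a stream, using the error event copied verbatim from the log above (only the parsing code around it is illustrative):

```python
import json

# One line of minikube's --output=json stream, copied from the log above.
line = (
    '{"specversion":"1.0","id":"32e3cf68-5a23-4b4a-b14e-8c2ff7427bf6",'
    '"source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json",'
    '"data":{"advice":"","exitcode":"56","issues":"",'
    '"message":"The driver \'fail\' is not supported on linux/arm64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}'
)

event = json.loads(line)
exit_code = None
message = None
if event["type"] == "io.k8s.sigs.minikube.error":
    # exitcode arrives as a string in the stream; the test expects 56 here.
    exit_code = int(event["data"]["exitcode"])
    message = event["data"]["message"]
```

The test's expected `exit status 56` corresponds to the `exitcode` field of this error event.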

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.05s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-863062 --network=
E0719 04:59:28.217563  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 04:59:30.694548  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-863062 --network=: (36.588038551s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-863062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-863062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-863062: (2.394688081s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.05s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.66s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-009569 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-009569 --network=bridge: (31.551556896s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-009569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-009569
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-009569: (2.087867723s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.66s)

                                                
                                    
TestKicExistingNetwork (33.43s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-691486 --network=existing-network
E0719 05:00:51.264985  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-691486 --network=existing-network: (31.220132317s)
helpers_test.go:175: Cleaning up "existing-network-691486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-691486
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-691486: (2.064448466s)
--- PASS: TestKicExistingNetwork (33.43s)

                                                
                                    
TestKicCustomSubnet (33.05s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-625456 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-625456 --subnet=192.168.60.0/24: (30.931063422s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-625456 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-625456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-625456
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-625456: (2.098108549s)
--- PASS: TestKicCustomSubnet (33.05s)
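The assertion behind `docker network inspect ... "{{(index .IPAM.Config 0).Subnet}}"` can be mirrored with Python's `ipaddress` module. The requested subnet comes from the `--subnet` flag above; the reported value and the node address are illustrative stand-ins, not captured from this run:

```python
import ipaddress

requested = ipaddress.ip_network("192.168.60.0/24")  # from --subnet above
reported = "192.168.60.0/24"  # what a passing `docker network inspect` would print
subnets_match = ipaddress.ip_network(reported) == requested

# Any container address docker assigns on this network must fall inside
# the subnet (the address below is a made-up example).
node_in_subnet = ipaddress.ip_address("192.168.60.2") in requested
```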

                                                
                                    
TestKicStaticIP (34.22s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-527830 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-527830 --static-ip=192.168.200.200: (31.948506167s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-527830 ip
helpers_test.go:175: Cleaning up "static-ip-527830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-527830
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-527830: (2.133862002s)
--- PASS: TestKicStaticIP (34.22s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (66.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-710256 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-710256 --driver=docker  --container-runtime=crio: (30.907144386s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-712980 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-712980 --driver=docker  --container-runtime=crio: (30.658214122s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-710256
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-712980
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-712980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-712980
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-712980: (1.944954276s)
helpers_test.go:175: Cleaning up "first-710256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-710256
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-710256: (1.954824677s)
--- PASS: TestMinikubeProfile (66.66s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-299034 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-299034 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.904607366s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.91s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-299034 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.45s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-312211 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-312211 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.453176741s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.45s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-312211 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-299034 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-299034 --alsologtostderr -v=5: (1.589979373s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-312211 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-312211
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-312211: (1.205959596s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-312211
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-312211: (7.038135394s)
--- PASS: TestMountStart/serial/RestartStopped (8.04s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-312211 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.71s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-938468 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0719 05:04:28.218056  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 05:04:30.695167  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-938468 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m54.198842891s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.71s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-938468 -- rollout status deployment/busybox: (3.626070301s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-2rnbg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-c78t6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-2rnbg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-c78t6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-2rnbg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-c78t6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.48s)
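The jsonpath query `{.items[*].status.podIP}` above flattens the pod list into a space-separated string. A Python sketch of the same extraction; the pod names are taken from the log above, while the IPs are hand-made stand-ins:

```python
# Stand-in for `kubectl get pods -o json` output (structure only; the
# podIP values are illustrative, not from this run).
pods = {
    "items": [
        {"metadata": {"name": "busybox-fc5497c4f-2rnbg"},
         "status": {"podIP": "10.244.0.3"}},
        {"metadata": {"name": "busybox-fc5497c4f-c78t6"},
         "status": {"podIP": "10.244.1.2"}},
    ]
}

# Equivalent of jsonpath='{.items[*].status.podIP}'
pod_ips = " ".join(p["status"]["podIP"] for p in pods["items"])
# Equivalent of jsonpath='{.items[*].metadata.name}'
pod_names = " ".join(p["metadata"]["name"] for p in pods["items"])
```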

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-2rnbg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-2rnbg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-c78t6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-938468 -- exec busybox-fc5497c4f-c78t6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
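The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` above takes the third space-delimited field of line 5 of the nslookup output. A Python rendering of that extraction; the sample output is an assumed busybox-style layout (only the 192.168.67.1 address appears in this log, as the ping target):

```python
# Assumed busybox nslookup output; only the host IP 192.168.67.1 is
# taken from the log above, the rest is illustrative.
sample = """Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal"""

line5 = sample.splitlines()[4]   # awk 'NR==5' (1-based line 5)
host_ip = line5.split(" ")[2]    # cut -d' ' -f3
```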

                                                
                                    
TestMultiNode/serial/AddNode (26.74s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-938468 -v 3 --alsologtostderr
E0719 05:05:53.738797  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-938468 -v 3 --alsologtostderr: (26.054649797s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-938468 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.32s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.82s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp testdata/cp-test.txt multinode-938468:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3657525992/001/cp-test_multinode-938468.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468:/home/docker/cp-test.txt multinode-938468-m02:/home/docker/cp-test_multinode-938468_multinode-938468-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m02 "sudo cat /home/docker/cp-test_multinode-938468_multinode-938468-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468:/home/docker/cp-test.txt multinode-938468-m03:/home/docker/cp-test_multinode-938468_multinode-938468-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m03 "sudo cat /home/docker/cp-test_multinode-938468_multinode-938468-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp testdata/cp-test.txt multinode-938468-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3657525992/001/cp-test_multinode-938468-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468-m02:/home/docker/cp-test.txt multinode-938468:/home/docker/cp-test_multinode-938468-m02_multinode-938468.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468 "sudo cat /home/docker/cp-test_multinode-938468-m02_multinode-938468.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468-m02:/home/docker/cp-test.txt multinode-938468-m03:/home/docker/cp-test_multinode-938468-m02_multinode-938468-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m03 "sudo cat /home/docker/cp-test_multinode-938468-m02_multinode-938468-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp testdata/cp-test.txt multinode-938468-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3657525992/001/cp-test_multinode-938468-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468-m03:/home/docker/cp-test.txt multinode-938468:/home/docker/cp-test_multinode-938468-m03_multinode-938468.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468 "sudo cat /home/docker/cp-test_multinode-938468-m03_multinode-938468.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 cp multinode-938468-m03:/home/docker/cp-test.txt multinode-938468-m02:/home/docker/cp-test_multinode-938468-m03_multinode-938468-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 ssh -n multinode-938468-m02 "sudo cat /home/docker/cp-test_multinode-938468-m03_multinode-938468-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.82s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-938468 node stop m03: (1.210606984s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-938468 status: exit status 7 (521.433149ms)
-- stdout --
	multinode-938468
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-938468-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-938468-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-938468 status --alsologtostderr: exit status 7 (509.300422ms)
-- stdout --
	multinode-938468
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-938468-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-938468-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0719 05:06:31.432433  558734 out.go:291] Setting OutFile to fd 1 ...
	I0719 05:06:31.432639  558734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:06:31.432671  558734 out.go:304] Setting ErrFile to fd 2...
	I0719 05:06:31.432695  558734 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:06:31.432968  558734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 05:06:31.433175  558734 out.go:298] Setting JSON to false
	I0719 05:06:31.433248  558734 mustload.go:65] Loading cluster: multinode-938468
	I0719 05:06:31.433291  558734 notify.go:220] Checking for updates...
	I0719 05:06:31.433759  558734 config.go:182] Loaded profile config "multinode-938468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 05:06:31.433799  558734 status.go:255] checking status of multinode-938468 ...
	I0719 05:06:31.434682  558734 cli_runner.go:164] Run: docker container inspect multinode-938468 --format={{.State.Status}}
	I0719 05:06:31.454572  558734 status.go:330] multinode-938468 host status = "Running" (err=<nil>)
	I0719 05:06:31.454604  558734 host.go:66] Checking if "multinode-938468" exists ...
	I0719 05:06:31.454965  558734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938468
	I0719 05:06:31.485402  558734 host.go:66] Checking if "multinode-938468" exists ...
	I0719 05:06:31.485777  558734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 05:06:31.485822  558734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938468
	I0719 05:06:31.504435  558734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33298 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/multinode-938468/id_rsa Username:docker}
	I0719 05:06:31.592103  558734 ssh_runner.go:195] Run: systemctl --version
	I0719 05:06:31.596478  558734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:06:31.608358  558734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:06:31.669845  558734 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-19 05:06:31.656152548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 05:06:31.670476  558734 kubeconfig.go:125] found "multinode-938468" server: "https://192.168.67.2:8443"
	I0719 05:06:31.670506  558734 api_server.go:166] Checking apiserver status ...
	I0719 05:06:31.670559  558734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:06:31.681916  558734 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1404/cgroup
	I0719 05:06:31.691523  558734 api_server.go:182] apiserver freezer: "11:freezer:/docker/08df01f0f78befdc0c4648a883aa013e0f03bd203376aa5216953248a27d541a/crio/crio-3188b7d0a0e040248beb6142c662b7a52abc49dd250b27c8bcea06e1709add11"
	I0719 05:06:31.691602  558734 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/08df01f0f78befdc0c4648a883aa013e0f03bd203376aa5216953248a27d541a/crio/crio-3188b7d0a0e040248beb6142c662b7a52abc49dd250b27c8bcea06e1709add11/freezer.state
	I0719 05:06:31.700912  558734 api_server.go:204] freezer state: "THAWED"
	I0719 05:06:31.700945  558734 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0719 05:06:31.708710  558734 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0719 05:06:31.708786  558734 status.go:422] multinode-938468 apiserver status = Running (err=<nil>)
	I0719 05:06:31.708812  558734 status.go:257] multinode-938468 status: &{Name:multinode-938468 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 05:06:31.708843  558734 status.go:255] checking status of multinode-938468-m02 ...
	I0719 05:06:31.709176  558734 cli_runner.go:164] Run: docker container inspect multinode-938468-m02 --format={{.State.Status}}
	I0719 05:06:31.728482  558734 status.go:330] multinode-938468-m02 host status = "Running" (err=<nil>)
	I0719 05:06:31.728527  558734 host.go:66] Checking if "multinode-938468-m02" exists ...
	I0719 05:06:31.728913  558734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-938468-m02
	I0719 05:06:31.746057  558734 host.go:66] Checking if "multinode-938468-m02" exists ...
	I0719 05:06:31.746361  558734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 05:06:31.746402  558734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-938468-m02
	I0719 05:06:31.763588  558734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33303 SSHKeyPath:/home/jenkins/minikube-integration/19302-437615/.minikube/machines/multinode-938468-m02/id_rsa Username:docker}
	I0719 05:06:31.851693  558734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:06:31.864103  558734 status.go:257] multinode-938468-m02 status: &{Name:multinode-938468-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 05:06:31.864143  558734 status.go:255] checking status of multinode-938468-m03 ...
	I0719 05:06:31.864450  558734 cli_runner.go:164] Run: docker container inspect multinode-938468-m03 --format={{.State.Status}}
	I0719 05:06:31.886142  558734 status.go:330] multinode-938468-m03 host status = "Stopped" (err=<nil>)
	I0719 05:06:31.886166  558734 status.go:343] host is not running, skipping remaining checks
	I0719 05:06:31.886174  558734 status.go:257] multinode-938468-m03 status: &{Name:multinode-938468-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

TestMultiNode/serial/StartAfterStop (10s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-938468 node start m03 -v=7 --alsologtostderr: (9.265910873s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.00s)

TestMultiNode/serial/RestartKeepsNodes (89.64s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-938468
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-938468
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-938468: (24.81379719s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-938468 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-938468 --wait=true -v=8 --alsologtostderr: (1m4.678959375s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-938468
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.64s)

TestMultiNode/serial/DeleteNode (5.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-938468 node delete m03: (4.557490755s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

TestMultiNode/serial/StopMultiNode (23.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-938468 stop: (23.688039409s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-938468 status: exit status 7 (92.600501ms)

-- stdout --
	multinode-938468
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-938468-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-938468 status --alsologtostderr: exit status 7 (91.2279ms)

-- stdout --
	multinode-938468
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-938468-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0719 05:08:40.578019  566180 out.go:291] Setting OutFile to fd 1 ...
	I0719 05:08:40.578227  566180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:08:40.578254  566180 out.go:304] Setting ErrFile to fd 2...
	I0719 05:08:40.578272  566180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:08:40.578566  566180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 05:08:40.578774  566180 out.go:298] Setting JSON to false
	I0719 05:08:40.578833  566180 mustload.go:65] Loading cluster: multinode-938468
	I0719 05:08:40.578923  566180 notify.go:220] Checking for updates...
	I0719 05:08:40.579282  566180 config.go:182] Loaded profile config "multinode-938468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 05:08:40.579316  566180 status.go:255] checking status of multinode-938468 ...
	I0719 05:08:40.580099  566180 cli_runner.go:164] Run: docker container inspect multinode-938468 --format={{.State.Status}}
	I0719 05:08:40.597238  566180 status.go:330] multinode-938468 host status = "Stopped" (err=<nil>)
	I0719 05:08:40.597258  566180 status.go:343] host is not running, skipping remaining checks
	I0719 05:08:40.597266  566180 status.go:257] multinode-938468 status: &{Name:multinode-938468 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 05:08:40.597301  566180 status.go:255] checking status of multinode-938468-m02 ...
	I0719 05:08:40.597599  566180 cli_runner.go:164] Run: docker container inspect multinode-938468-m02 --format={{.State.Status}}
	I0719 05:08:40.619581  566180 status.go:330] multinode-938468-m02 host status = "Stopped" (err=<nil>)
	I0719 05:08:40.619603  566180 status.go:343] host is not running, skipping remaining checks
	I0719 05:08:40.619609  566180 status.go:257] multinode-938468-m02 status: &{Name:multinode-938468-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.87s)

TestMultiNode/serial/RestartMultiNode (61.02s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-938468 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0719 05:09:28.218218  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 05:09:30.694851  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-938468 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m0.315973136s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-938468 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (61.02s)

TestMultiNode/serial/ValidateNameConflict (33.16s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-938468
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-938468-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-938468-m02 --driver=docker  --container-runtime=crio: exit status 14 (80.594924ms)

-- stdout --
	* [multinode-938468-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-938468-m02' is duplicated with machine name 'multinode-938468-m02' in profile 'multinode-938468'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-938468-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-938468-m03 --driver=docker  --container-runtime=crio: (30.762014423s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-938468
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-938468: exit status 80 (324.840867ms)

-- stdout --
	* Adding node m03 to cluster multinode-938468 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-938468-m03 already exists in multinode-938468-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-938468-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-938468-m03: (1.93987713s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.16s)

TestPreload (124.72s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-771214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-771214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m33.190377684s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-771214 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-771214 image pull gcr.io/k8s-minikube/busybox: (1.82753197s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-771214
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-771214: (5.749500992s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-771214 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-771214 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.253271544s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-771214 image list
helpers_test.go:175: Cleaning up "test-preload-771214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-771214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-771214: (2.339841343s)
--- PASS: TestPreload (124.72s)

TestScheduledStopUnix (107.83s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-584936 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-584936 --memory=2048 --driver=docker  --container-runtime=crio: (30.701644469s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-584936 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-584936 -n scheduled-stop-584936
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-584936 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-584936 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-584936 -n scheduled-stop-584936
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-584936
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-584936 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-584936
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-584936: exit status 7 (64.169595ms)

-- stdout --
	scheduled-stop-584936
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-584936 -n scheduled-stop-584936
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-584936 -n scheduled-stop-584936: exit status 7 (65.512056ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-584936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-584936
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-584936: (5.643532413s)
--- PASS: TestScheduledStopUnix (107.83s)

TestInsufficientStorage (10.43s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-075751 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-075751 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.927988282s)

-- stdout --
	{"specversion":"1.0","id":"9b33891b-fd76-4da3-9359-56344e652c48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-075751] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f10638e-eb54-4bab-aac5-9380069509de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"0a098097-dad0-43ef-a509-5b050f2922b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dba80696-8b88-4cbf-a4ee-5f2caa772607","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig"}}
	{"specversion":"1.0","id":"6272c4c6-ffc2-4a71-9726-101821535fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube"}}
	{"specversion":"1.0","id":"8620344e-5550-49e6-8f75-6a5aab0dbe51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4025c43c-9c15-4a0f-860a-337e70885ead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f8286d69-6770-497b-bd06-60315bb3362d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1e3a82f3-9860-46cf-8341-aa24b91ea252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4fb23802-6a57-4725-9752-a80845560c91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0f00d92-3dec-4d35-828c-9cc52aabfdef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c166a9c2-78f6-480f-b78b-219ec5cc1669","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-075751\" primary control-plane node in \"insufficient-storage-075751\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4067ca95-0d12-4817-89ba-47b1e8c4bd12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721324606-19298 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2ba350c-457d-4207-9e3d-071c8f268ee3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d3ed2b1-d300-4282-b954-71e75713ba93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-075751 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-075751 --output=json --layout=cluster: exit status 7 (276.106267ms)

-- stdout --
	{"Name":"insufficient-storage-075751","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-075751","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0719 05:14:19.605760  583842 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-075751" does not appear in /home/jenkins/minikube-integration/19302-437615/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-075751 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-075751 --output=json --layout=cluster: exit status 7 (273.246458ms)

-- stdout --
	{"Name":"insufficient-storage-075751","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-075751","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0719 05:14:19.880200  583901 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-075751" does not appear in /home/jenkins/minikube-integration/19302-437615/kubeconfig
	E0719 05:14:19.890556  583901 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/insufficient-storage-075751/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-075751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-075751
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-075751: (1.952349163s)
--- PASS: TestInsufficientStorage (10.43s)

TestRunningBinaryUpgrade (70.34s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2895150423 start -p running-upgrade-925470 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2895150423 start -p running-upgrade-925470 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.214116929s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-925470 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0719 05:24:28.217898  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 05:24:30.695864  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-925470 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.364368049s)
helpers_test.go:175: Cleaning up "running-upgrade-925470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-925470
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-925470: (2.650755919s)
--- PASS: TestRunningBinaryUpgrade (70.34s)

TestKubernetesUpgrade (382.48s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-094733 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-094733 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.683944663s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-094733
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-094733: (1.230616397s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-094733 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-094733 status --format={{.Host}}: exit status 7 (70.639712ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-094733 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0719 05:19:28.217671  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 05:19:30.694487  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-094733 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.765216831s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-094733 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-094733 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-094733 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (147.384009ms)

-- stdout --
	* [kubernetes-upgrade-094733] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-094733
	    minikube start -p kubernetes-upgrade-094733 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0947332 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-094733 --kubernetes-version=v1.31.0-beta.0

** /stderr **
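The K8S_DOWNGRADE_UNSUPPORTED suggestion above amounts to a short recovery script. A sketch of option 1 (recreate the cluster at the older version), assembled from the commands minikube itself prints in the stderr block; this is not executed by the test, which instead restarts at the newer version:

```shell
# Downgrading in place is unsupported, so destroy the profile and
# recreate it pinned to the target Kubernetes version.
minikube delete -p kubernetes-upgrade-094733
minikube start -p kubernetes-upgrade-094733 --kubernetes-version=v1.20.0
```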
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-094733 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-094733 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.747117204s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-094733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-094733
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-094733: (2.666872193s)
--- PASS: TestKubernetesUpgrade (382.48s)

TestMissingContainerUpgrade (164.45s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3369335813 start -p missing-upgrade-310817 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3369335813 start -p missing-upgrade-310817 --memory=2200 --driver=docker  --container-runtime=crio: (1m25.688133438s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-310817
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-310817: (10.489905182s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-310817
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-310817 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0719 05:22:33.739616  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-310817 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.08696799s)
helpers_test.go:175: Cleaning up "missing-upgrade-310817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-310817
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-310817: (2.033580734s)
--- PASS: TestMissingContainerUpgrade (164.45s)

TestPause/serial/Start (97.04s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-762332 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-762332 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m37.042923581s)
--- PASS: TestPause/serial/Start (97.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-862814 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-862814 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (105.109003ms)

-- stdout --
	* [NoKubernetes-862814] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
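The MK_USAGE error above is the expected rejection of the conflicting flag pair. A sketch of the fix-up path that minikube's own message describes: drop the pinned version (or unset any global default) before starting a no-Kubernetes profile; the commands are taken from the log, not newly invented:

```shell
# --kubernetes-version conflicts with --no-kubernetes; clear any global
# default, then start the profile without Kubernetes.
minikube config unset kubernetes-version
minikube start -p NoKubernetes-862814 --no-kubernetes --driver=docker --container-runtime=crio
```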
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (43.37s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-862814 --driver=docker  --container-runtime=crio
E0719 05:14:28.217660  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 05:14:30.695302  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-862814 --driver=docker  --container-runtime=crio: (42.961997568s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-862814 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.37s)

TestNoKubernetes/serial/StartWithStopK8s (13.88s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-862814 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-862814 --no-kubernetes --driver=docker  --container-runtime=crio: (11.550360074s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-862814 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-862814 status -o json: exit status 2 (316.875061ms)

-- stdout --
	{"Name":"NoKubernetes-862814","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
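The `status -o json` output above is what scripts would parse to confirm the host is running while the kubelet and API server are stopped. A minimal sketch using only POSIX tools (the JSON string is copied verbatim from the stdout above; the `sed` extraction is just one illustrative way to read a field without requiring `jq`):

```shell
# Status JSON as captured in the log for the --no-kubernetes profile.
status='{"Name":"NoKubernetes-862814","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
# Pull out the Kubelet field with sed; for the string above this yields "Stopped".
kubelet=$(printf '%s' "$status" | sed -n 's/.*"Kubelet":"\([^"]*\)".*/\1/p')
echo "Kubelet=$kubelet"   # prints: Kubelet=Stopped
```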
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-862814
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-862814: (2.008994339s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.88s)

TestNoKubernetes/serial/Start (6.08s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-862814 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-862814 --no-kubernetes --driver=docker  --container-runtime=crio: (6.08447525s)
--- PASS: TestNoKubernetes/serial/Start (6.08s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-862814 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-862814 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.88946ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.00s)

TestNoKubernetes/serial/Stop (1.23s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-862814
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-862814: (1.227468218s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (7.74s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-862814 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-862814 --driver=docker  --container-runtime=crio: (7.744466911s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.74s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-862814 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-862814 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.156762ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestNetworkPlugins/group/false (3.64s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-563393 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-563393 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (185.945584ms)

-- stdout --
	* [false-563393] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0719 05:15:40.985403  594162 out.go:291] Setting OutFile to fd 1 ...
	I0719 05:15:40.985556  594162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:15:40.985579  594162 out.go:304] Setting ErrFile to fd 2...
	I0719 05:15:40.985599  594162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:15:40.985857  594162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-437615/.minikube/bin
	I0719 05:15:40.986283  594162 out.go:298] Setting JSON to false
	I0719 05:15:40.987356  594162 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":10686,"bootTime":1721355455,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0719 05:15:40.987457  594162 start.go:139] virtualization:  
	I0719 05:15:40.991420  594162 out.go:177] * [false-563393] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0719 05:15:40.994023  594162 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:15:40.994158  594162 notify.go:220] Checking for updates...
	I0719 05:15:40.998718  594162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:15:41.002961  594162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-437615/kubeconfig
	I0719 05:15:41.005752  594162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-437615/.minikube
	I0719 05:15:41.008091  594162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0719 05:15:41.010050  594162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:15:41.012626  594162 config.go:182] Loaded profile config "pause-762332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 05:15:41.012786  594162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:15:41.037001  594162 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0719 05:15:41.037172  594162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0719 05:15:41.105794  594162 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-19 05:15:41.096169628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0719 05:15:41.105914  594162 docker.go:307] overlay module found
	I0719 05:15:41.108272  594162 out.go:177] * Using the docker driver based on user configuration
	I0719 05:15:41.110619  594162 start.go:297] selected driver: docker
	I0719 05:15:41.110653  594162 start.go:901] validating driver "docker" against <nil>
	I0719 05:15:41.110670  594162 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:15:41.113389  594162 out.go:177] 
	W0719 05:15:41.116002  594162 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0719 05:15:41.118213  594162 out.go:177] 

** /stderr **
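The test expects exit status 14 here because, as the stderr says, the "crio" container runtime requires a CNI. A hypothetical variant of the same invocation that would pass validation, assuming one of the named CNIs accepted by minikube's `--cni` flag (this value is an assumption from the flag's documented choices, not from this log):

```shell
# crio needs a CNI, so --cni=false is rejected; pick an explicit CNI instead.
minikube start -p false-563393 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio
```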
net_test.go:88: 
----------------------- debugLogs start: false-563393 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-563393

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-563393" does not exist

>>> k8s: describe coredns deployment:
error: context "false-563393" does not exist

>>> k8s: describe coredns pods:
error: context "false-563393" does not exist

>>> k8s: coredns logs:
error: context "false-563393" does not exist

>>> k8s: describe api server pod(s):
error: context "false-563393" does not exist

>>> k8s: api server logs:
error: context "false-563393" does not exist

>>> host: /etc/cni:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: ip a s:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: ip r s:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: iptables-save:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: iptables table nat:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> k8s: describe kube-proxy daemon set:
error: context "false-563393" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-563393" does not exist

>>> k8s: kube-proxy logs:
error: context "false-563393" does not exist

>>> host: kubelet daemon status:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: kubelet daemon config:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> k8s: kubelet logs:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 05:15:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-762332
contexts:
- context:
    cluster: pause-762332
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 05:15:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-762332
  name: pause-762332
current-context: pause-762332
kind: Config
preferences: {}
users:
- name: pause-762332
  user:
    client-certificate: /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/pause-762332/client.crt
    client-key: /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/pause-762332/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-563393

>>> host: docker daemon status:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: docker daemon config:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: /etc/docker/daemon.json:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: docker system info:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: cri-docker daemon status:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: cri-docker daemon config:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: cri-dockerd version:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: containerd daemon status:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: containerd daemon config:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: /etc/containerd/config.toml:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: containerd config dump:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: crio daemon status:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: crio daemon config:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: /etc/crio:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

>>> host: crio config:
* Profile "false-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-563393"

----------------------- debugLogs end: false-563393 [took: 3.300993218s] --------------------------------
helpers_test.go:175: Cleaning up "false-563393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-563393
--- PASS: TestNetworkPlugins/group/false (3.64s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.87s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-762332 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-762332 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.852416386s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.87s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-762332 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-762332 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-762332 --output=json --layout=cluster: exit status 2 (395.22103ms)

-- stdout --
	{"Name":"pause-762332","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-762332","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)

                                                
                                    
TestPause/serial/Unpause (1.39s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-762332 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-762332 --alsologtostderr -v=5: (1.389387788s)
--- PASS: TestPause/serial/Unpause (1.39s)

TestPause/serial/PauseAgain (1.33s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-762332 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-762332 --alsologtostderr -v=5: (1.331852484s)
--- PASS: TestPause/serial/PauseAgain (1.33s)

TestPause/serial/DeletePaused (3.04s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-762332 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-762332 --alsologtostderr -v=5: (3.04414589s)
--- PASS: TestPause/serial/DeletePaused (3.04s)

TestPause/serial/VerifyDeletedResources (0.44s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-762332
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-762332: exit status 1 (15.871112ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-762332: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.2s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.20s)

TestStoppedBinaryUpgrade/Upgrade (86.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.570838869 start -p stopped-upgrade-850815 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.570838869 start -p stopped-upgrade-850815 --memory=2200 --vm-driver=docker  --container-runtime=crio: (45.691517414s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.570838869 -p stopped-upgrade-850815 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.570838869 -p stopped-upgrade-850815 stop: (4.151236055s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-850815 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-850815 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.687553043s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (86.53s)

TestNetworkPlugins/group/auto/Start (99.09s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m39.087306714s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.09s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-850815
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-850815: (1.01782024s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

TestNetworkPlugins/group/kindnet/Start (92.47s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m32.474640436s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.47s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-563393 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-563393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pr4qm" [60708b9a-1616-45a8-85ad-d5035a14039b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pr4qm" [60708b9a-1616-45a8-85ad-d5035a14039b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004057193s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-563393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (71.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.706337721s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-z98c9" [f1fb60b5-757d-4228-b1f0-db643ac3d76a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004672544s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-563393 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-563393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tg8gm" [209d7850-a0be-479e-8a09-3beed409560c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tg8gm" [209d7850-a0be-479e-8a09-3beed409560c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.003756697s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-563393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (76.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m16.885733621s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vbgh4" [4778c8a8-abac-454c-9123-cbfa795a7bd8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005564385s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-563393 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-563393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hdp45" [1e4ba4d8-bbf0-4447-9b70-2391c5fa59ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hdp45" [1e4ba4d8-bbf0-4447-9b70-2391c5fa59ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004245734s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-563393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (46.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (46.505661613s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.51s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-563393 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-563393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-g6hvf" [0d638123-06c6-42bf-a8f7-e7c18059da14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-g6hvf" [0d638123-06c6-42bf-a8f7-e7c18059da14] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003437243s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-563393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-563393 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-563393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7dwjl" [5ca59fed-a9b1-4701-a827-5901c2280d10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7dwjl" [5ca59fed-a9b1-4701-a827-5901c2280d10] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004227998s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.47s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m14.473066946s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.47s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-563393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (90.14s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-563393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m30.138048299s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-w65j8" [6b9bf6f7-2137-4561-825a-15069778bf9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003622649s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-563393 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-563393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2d5bx" [f624dd60-b250-46d7-a44e-4ec6811a469e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2d5bx" [f624dd60-b250-46d7-a44e-4ec6811a469e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00419365s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-563393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (184.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-840681 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-840681 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m4.413446822s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (184.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-563393 "pgrep -a kubelet"
E0719 05:31:48.780658  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/auto-563393/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.44s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-563393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2fzdh" [2e9b93fa-c8e5-4528-8949-4e626d6bf324] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2fzdh" [2e9b93fa-c8e5-4528-8949-4e626d6bf324] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005573611s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-563393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-563393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-570339 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0719 05:32:29.911639  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/kindnet-563393/client.crt: no such file or directory
E0719 05:32:50.221922  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/auto-563393/client.crt: no such file or directory
E0719 05:32:50.392344  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/kindnet-563393/client.crt: no such file or directory
E0719 05:33:10.567536  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:10.572833  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:10.583182  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:10.603604  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:10.644334  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:10.724791  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:10.885211  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:11.205441  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:11.846114  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:13.126540  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:15.686991  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:20.807349  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:31.047868  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:33:31.353346  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/kindnet-563393/client.crt: no such file or directory
E0719 05:33:51.528381  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-570339 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m34.776482225s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-570339 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa64a7da-22c6-428c-bc66-7a41344847c0] Pending
helpers_test.go:344: "busybox" [fa64a7da-22c6-428c-bc66-7a41344847c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa64a7da-22c6-428c-bc66-7a41344847c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003819826s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-570339 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-570339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-570339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.020235432s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-570339 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-570339 --alsologtostderr -v=3
E0719 05:34:10.629092  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:10.634346  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:10.644593  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:10.664903  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:10.705703  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:10.786139  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:10.946609  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:11.266044  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 05:34:11.267174  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:11.908064  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:12.142509  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/auto-563393/client.crt: no such file or directory
E0719 05:34:13.188750  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:15.749040  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-570339 --alsologtostderr -v=3: (12.090732553s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339: exit status 7 (64.258075ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-570339 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-570339 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0719 05:34:20.869776  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:28.218110  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 05:34:30.695104  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 05:34:31.110808  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:34:32.489414  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:34:42.881551  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:42.886870  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:42.897125  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:42.917408  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:42.957701  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:43.038031  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:43.198453  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:43.518909  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-570339 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m37.452112989s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (277.81s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-840681 create -f testdata/busybox.yaml
E0719 05:34:44.159720  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f2c33fe0-36b9-440e-b948-9ce3ff4c64dd] Pending
E0719 05:34:45.440200  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f2c33fe0-36b9-440e-b948-9ce3ff4c64dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f2c33fe0-36b9-440e-b948-9ce3ff4c64dd] Running
E0719 05:34:48.002519  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:51.591323  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004500065s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-840681 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-840681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0719 05:34:53.123143  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:34:53.274266  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/kindnet-563393/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-840681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.237048679s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-840681 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.49s)

TestStartStop/group/old-k8s-version/serial/Stop (12.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-840681 --alsologtostderr -v=3
E0719 05:35:03.363421  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-840681 --alsologtostderr -v=3: (12.655872254s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-840681 -n old-k8s-version-840681
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-840681 -n old-k8s-version-840681: exit status 7 (67.764497ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-840681 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (140.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-840681 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0719 05:35:23.844349  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:35:32.551728  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:35:54.409981  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:36:00.616775  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:00.622113  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:00.632460  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:00.652775  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:00.693040  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:00.773378  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:00.933834  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:01.254529  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:01.895363  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:03.176366  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:04.804995  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:36:05.736582  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:10.857381  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:21.098555  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:28.299077  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/auto-563393/client.crt: no such file or directory
E0719 05:36:41.579176  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:36:49.508901  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:49.514171  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:49.524443  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:49.544785  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:49.585100  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:49.665396  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:49.826402  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:50.147330  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:50.787528  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:52.067715  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:54.472211  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
E0719 05:36:54.628359  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:36:55.983197  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/auto-563393/client.crt: no such file or directory
E0719 05:36:59.749168  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:37:09.430990  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/kindnet-563393/client.crt: no such file or directory
E0719 05:37:09.990243  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:37:22.539503  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:37:26.725123  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-840681 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m19.781215015s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-840681 -n old-k8s-version-840681
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.14s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d8sb7" [7b54a9f9-596d-4425-83bc-327a900bdaaa] Running
E0719 05:37:30.470425  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00330884s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d8sb7" [7b54a9f9-596d-4425-83bc-327a900bdaaa] Running
E0719 05:37:37.114835  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/kindnet-563393/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00394744s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-840681 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-840681 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-840681 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-840681 -n old-k8s-version-840681
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-840681 -n old-k8s-version-840681: exit status 2 (299.439569ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-840681 -n old-k8s-version-840681
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-840681 -n old-k8s-version-840681: exit status 2 (320.976573ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-840681 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-840681 -n old-k8s-version-840681
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-840681 -n old-k8s-version-840681
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

TestStartStop/group/embed-certs/serial/FirstStart (89.46s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-326153 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0719 05:38:10.567492  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:38:11.430654  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:38:38.250396  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:38:44.460224  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-326153 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (1m29.459700276s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.46s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-nx724" [5dfcec5d-9640-4145-b989-0fd50c68d940] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003201448s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-nx724" [5dfcec5d-9640-4145-b989-0fd50c68d940] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003736766s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-570339 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-570339 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-570339 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339
E0719 05:39:10.629367  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339: exit status 2 (315.115383ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339: exit status 2 (329.404886ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-570339 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-570339 -n default-k8s-diff-port-570339
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)

TestStartStop/group/embed-certs/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-326153 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7e582c70-9090-49a9-b185-3912e6116a4d] Pending
helpers_test.go:344: "busybox" [7e582c70-9090-49a9-b185-3912e6116a4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7e582c70-9090-49a9-b185-3912e6116a4d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.006639636s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-326153 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.56s)

TestStartStop/group/no-preload/serial/FirstStart (66.44s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-902972 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-902972 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m6.436590005s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.44s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-326153 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-326153 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.213984749s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-326153 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/embed-certs/serial/Stop (13.81s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-326153 --alsologtostderr -v=3
E0719 05:39:28.218267  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
E0719 05:39:30.695128  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/functional-402532/client.crt: no such file or directory
E0719 05:39:33.350867  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:39:38.312880  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-326153 --alsologtostderr -v=3: (13.812155294s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.81s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326153 -n embed-certs-326153
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326153 -n embed-certs-326153: exit status 7 (114.282505ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-326153 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/embed-certs/serial/SecondStart (275.8s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-326153 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3
E0719 05:39:42.881134  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:39:44.451185  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:44.457295  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:44.468317  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:44.490496  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:44.530723  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:44.611041  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:44.771502  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:45.092156  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:45.733125  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:47.014276  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:49.575179  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:39:54.695610  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:40:04.936461  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:40:10.566074  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-326153 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.3: (4m35.449541443s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-326153 -n embed-certs-326153
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (275.80s)

TestStartStop/group/no-preload/serial/DeployApp (8.44s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-902972 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e978e4d5-7fe9-46f8-871e-e5390e2416dc] Pending
helpers_test.go:344: "busybox" [e978e4d5-7fe9-46f8-871e-e5390e2416dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0719 05:40:25.416609  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e978e4d5-7fe9-46f8-871e-e5390e2416dc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005232045s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-902972 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.44s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-902972 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-902972 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (11.96s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-902972 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-902972 --alsologtostderr -v=3: (11.962696951s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.96s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-902972 -n no-preload-902972
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-902972 -n no-preload-902972: exit status 7 (74.070714ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-902972 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (268.88s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-902972 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0719 05:41:00.616766  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:41:06.377515  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:41:28.298973  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/auto-563393/client.crt: no such file or directory
E0719 05:41:28.301249  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/flannel-563393/client.crt: no such file or directory
E0719 05:41:49.509520  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:42:09.430588  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/kindnet-563393/client.crt: no such file or directory
E0719 05:42:17.191647  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/bridge-563393/client.crt: no such file or directory
E0719 05:42:28.297790  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:43:10.566822  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/calico-563393/client.crt: no such file or directory
E0719 05:43:58.986381  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:43:58.991707  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:43:59.002548  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:43:59.022983  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:43:59.063255  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:43:59.143595  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:43:59.303937  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:43:59.624082  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:44:00.264943  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:44:01.545800  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:44:04.106856  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:44:09.228077  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:44:10.628820  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/custom-flannel-563393/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-902972 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (4m28.497936536s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-902972 -n no-preload-902972
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.88s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hs685" [be26bd00-c185-4fc7-af3f-e956115718f5] Running
E0719 05:44:19.468818  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003600998s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hs685" [be26bd00-c185-4fc7-af3f-e956115718f5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003223955s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-326153 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-326153 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (3.03s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-326153 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326153 -n embed-certs-326153
E0719 05:44:28.217606  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/addons-014077/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326153 -n embed-certs-326153: exit status 2 (312.972022ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-326153 -n embed-certs-326153
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-326153 -n embed-certs-326153: exit status 2 (310.072314ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-326153 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-326153 -n embed-certs-326153
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-326153 -n embed-certs-326153
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)

TestStartStop/group/newest-cni/serial/FirstStart (41.63s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-918176 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0719 05:44:39.949425  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
E0719 05:44:42.881182  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/enable-default-cni-563393/client.crt: no such file or directory
E0719 05:44:44.451160  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
E0719 05:45:12.138636  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/old-k8s-version-840681/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-918176 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (41.63244046s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.63s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-lhh69" [c354c320-9ab3-4ebf-87e7-9310cc523e37] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010209092s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-918176 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-918176 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.209011592s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/newest-cni/serial/Stop (1.29s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-918176 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-918176 --alsologtostderr -v=3: (1.293569614s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-918176 -n newest-cni-918176
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-918176 -n newest-cni-918176: exit status 7 (74.43093ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-918176 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (18.27s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-918176 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-918176 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (17.783530591s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-918176 -n newest-cni-918176
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.27s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-lhh69" [c354c320-9ab3-4ebf-87e7-9310cc523e37] Running
E0719 05:45:20.910419  443154 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/default-k8s-diff-port-570339/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004913619s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-902972 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-902972 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (4.52s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-902972 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-902972 --alsologtostderr -v=1: (1.143504753s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-902972 -n no-preload-902972
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-902972 -n no-preload-902972: exit status 2 (580.476389ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-902972 -n no-preload-902972
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-902972 -n no-preload-902972: exit status 2 (516.693343ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-902972 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-902972 --alsologtostderr -v=1: (1.174151807s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-902972 -n no-preload-902972
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-902972 -n no-preload-902972
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.52s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-918176 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-918176 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-918176 -n newest-cni-918176
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-918176 -n newest-cni-918176: exit status 2 (300.059894ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-918176 -n newest-cni-918176
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-918176 -n newest-cni-918176: exit status 2 (304.823357ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-918176 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-918176 -n newest-cni-918176
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-918176 -n newest-cni-918176
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                    

Test skip (33/336)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.58s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-335826 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-335826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-335826
--- SKIP: TestDownloadOnlyKic (0.58s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-563393 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-563393

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-563393" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 05:15:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-762332
contexts:
- context:
    cluster: pause-762332
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 05:15:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-762332
  name: pause-762332
current-context: pause-762332
kind: Config
preferences: {}
users:
- name: pause-762332
  user:
    client-certificate: /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/pause-762332/client.crt
    client-key: /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/pause-762332/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-563393

>>> host: docker daemon status:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: docker daemon config:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: docker system info:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: cri-docker daemon status:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: cri-docker daemon config:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: cri-dockerd version:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: containerd daemon status:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: containerd daemon config:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: containerd config dump:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: crio daemon status:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: crio daemon config:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: /etc/crio:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

>>> host: crio config:
* Profile "kubenet-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-563393"

----------------------- debugLogs end: kubenet-563393 [took: 3.312868988s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-563393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-563393
--- SKIP: TestNetworkPlugins/group/kubenet (3.47s)

x
+
TestNetworkPlugins/group/cilium (3.99s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-563393 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-563393

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-563393

>>> host: /etc/nsswitch.conf:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /etc/hosts:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /etc/resolv.conf:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-563393

>>> host: crictl pods:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: crictl containers:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> k8s: describe netcat deployment:
error: context "cilium-563393" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-563393" does not exist

>>> k8s: netcat logs:
error: context "cilium-563393" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-563393" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-563393" does not exist

>>> k8s: coredns logs:
error: context "cilium-563393" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-563393" does not exist

>>> k8s: api server logs:
error: context "cilium-563393" does not exist

>>> host: /etc/cni:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: ip a s:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: ip r s:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: iptables-save:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: iptables table nat:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-563393

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-563393

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-563393" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-563393" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-563393

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-563393

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-563393" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-563393" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-563393" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-563393" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-563393" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: kubelet daemon config:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> k8s: kubelet logs:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19302-437615/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 05:15:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-762332
contexts:
- context:
    cluster: pause-762332
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 05:15:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-762332
  name: pause-762332
current-context: pause-762332
kind: Config
preferences: {}
users:
- name: pause-762332
  user:
    client-certificate: /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/pause-762332/client.crt
    client-key: /home/jenkins/minikube-integration/19302-437615/.minikube/profiles/pause-762332/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-563393

>>> host: docker daemon status:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: docker daemon config:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: docker system info:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: cri-docker daemon status:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: cri-docker daemon config:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: cri-dockerd version:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: containerd daemon status:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: containerd daemon config:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: containerd config dump:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: crio daemon status:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: crio daemon config:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: /etc/crio:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

>>> host: crio config:
* Profile "cilium-563393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563393"

----------------------- debugLogs end: cilium-563393 [took: 3.829293066s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-563393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-563393
--- SKIP: TestNetworkPlugins/group/cilium (3.99s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-901208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-901208
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)