Test Report: Docker_Linux_crio 19265

4b25178fc7513411450a4d543cff32ee34a2d14b:2024-07-17:35370

Tests failed (2/336)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 39    | TestAddons/parallel/Ingress       | 154.56       |
| 41    | TestAddons/parallel/MetricsServer | 307.19       |
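
Both failures belong to the TestAddons parallel group and ran against the same addons-957510 profile, so one cluster reproduces them. A minimal sketch for scoping a local re-run to a single failing test, assuming minikube's usual "env TEST_ARGS=... make integration" contributor workflow (the flag values below are illustrative assumptions, not taken from this report):

    # Hypothetical single-test re-run; -test.run and -minikube-start-args are
    # the knobs minikube's integration harness is documented to accept:
    env TEST_ARGS="-minikube-start-args=--driver=docker -test.run TestAddons/parallel/Ingress" make integration
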
TestAddons/parallel/Ingress (154.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-957510 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-957510 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-957510 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b5b527ac-7aab-49fa-970a-f8d44e34b681] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b5b527ac-7aab-49fa-970a-f8d44e34b681] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003269484s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-957510 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.832679993s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-957510 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-957510 addons disable ingress --alsologtostderr -v=1: (7.606825159s)
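
Status 28 from the remote process is curl's CURLE_OPERATION_TIMEDOUT code: the SSH session into the node succeeded, but the request through the ingress never completed, so the harness gave up after 2m9s. A sketch for replaying the probe by hand while the profile is still up (the -m 30 cap is an addition here so a hang surfaces quickly as exit code 28; the test's own curl runs without it):

    # Same request addons_test.go:264 issues, plus an explicit curl timeout:
    out/minikube-linux-amd64 -p addons-957510 ssh \
      "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Success prints the nginx page served by the test pod; a hang ends with
    # "ssh: Process exited with status 28", as in the failure above.
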
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-957510
helpers_test.go:235: (dbg) docker inspect addons-957510:

-- stdout --
	[
	    {
	        "Id": "6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1",
	        "Created": "2024-07-17T00:05:12.056413268Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 21981,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T00:05:12.189522438Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8e13c0121d32d5213820fd1c1408d440c10e972c9e29d75579ef86b050a145b3",
	        "ResolvConfPath": "/var/lib/docker/containers/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1/hosts",
	        "LogPath": "/var/lib/docker/containers/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1-json.log",
	        "Name": "/addons-957510",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-957510:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-957510",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fac6802473e8f33274b5d385582fe53c687eb00ff0ea03356b2b8448406e6cd-init/diff:/var/lib/docker/overlay2/bb7af9236849a801cb258b267ec61d57df411fd5cfaae48b7e138223f703f6dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fac6802473e8f33274b5d385582fe53c687eb00ff0ea03356b2b8448406e6cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fac6802473e8f33274b5d385582fe53c687eb00ff0ea03356b2b8448406e6cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fac6802473e8f33274b5d385582fe53c687eb00ff0ea03356b2b8448406e6cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-957510",
	                "Source": "/var/lib/docker/volumes/addons-957510/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-957510",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-957510",
	                "name.minikube.sigs.k8s.io": "addons-957510",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b163bd16f9d16f1ee01bfa65f772cad11a58d813969ba8e4e371703d8d58c98e",
	            "SandboxKey": "/var/run/docker/netns/b163bd16f9d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-957510": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "dc4d896cb023151875263d302f8a87f1c988b74f80c5bcec5ccaaa0ab83c7bdb",
	                    "EndpointID": "c132e65989466166285b8b470af8c94735ab9e2c922cfc679448e91546b7b799",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-957510",
	                        "6f98c2cd701a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
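
The dump above is the container's full inspect document. When only a field or two matters, docker inspect accepts a Go template via -f/--format, the same pattern the harness uses later in this log to resolve the mapped SSH port; a sketch:

    # Container state, as checked during kic startup further down this log:
    docker container inspect -f '{{.State.Status}}' addons-957510
    # Host port bound to the node's SSH port (22/tcp):
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-957510
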
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-957510 -n addons-957510
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-957510 logs -n 25: (1.167627985s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-659232                                                                     | download-only-659232   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-110186                                                                     | download-only-110186   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-874175                                                                     | download-only-874175   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | --download-only -p                                                                          | download-docker-079405 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | download-docker-079405                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-079405                                                                   | download-docker-079405 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-733705   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | binary-mirror-733705                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36213                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-733705                                                                     | binary-mirror-733705   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| addons  | enable dashboard -p                                                                         | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | addons-957510                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | addons-957510                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-957510 --wait=true                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | -p addons-957510                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-957510 ssh cat                                                                       | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | /opt/local-path-provisioner/pvc-7a2029d1-4210-4ea3-8f80-a2f46d6b3dac_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:09 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | addons-957510                                                                               |                        |         |         |                     |                     |
	| ip      | addons-957510 ip                                                                            | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | -p addons-957510                                                                            |                        |         |         |                     |                     |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | addons-957510                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-957510 ssh curl -s                                                                   | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-957510 addons                                                                        | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-957510 addons                                                                        | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-957510 ip                                                                            | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:11 UTC | 17 Jul 24 00:11 UTC |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:11 UTC | 17 Jul 24 00:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:11 UTC | 17 Jul 24 00:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:50.010945   21245 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:50.011090   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:50.011103   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:50.011109   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:50.011336   21245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:04:50.011993   21245 out.go:298] Setting JSON to false
	I0717 00:04:50.012938   21245 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2837,"bootTime":1721171853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:50.013000   21245 start.go:139] virtualization: kvm guest
	I0717 00:04:50.015281   21245 out.go:177] * [addons-957510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:04:50.017446   21245 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:04:50.017471   21245 notify.go:220] Checking for updates...
	I0717 00:04:50.020464   21245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:50.022122   21245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:04:50.023539   21245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	I0717 00:04:50.024988   21245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:04:50.026322   21245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:04:50.027842   21245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:50.048290   21245 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:04:50.048412   21245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:50.094091   21245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 00:04:50.085501517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:50.094194   21245 docker.go:307] overlay module found
	I0717 00:04:50.095971   21245 out.go:177] * Using the docker driver based on user configuration
	I0717 00:04:50.097177   21245 start.go:297] selected driver: docker
	I0717 00:04:50.097191   21245 start.go:901] validating driver "docker" against <nil>
	I0717 00:04:50.097200   21245 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:04:50.097944   21245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:50.142967   21245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 00:04:50.134453498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:50.143121   21245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:50.143309   21245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:04:50.145135   21245 out.go:177] * Using Docker driver with root privileges
	I0717 00:04:50.146515   21245 cni.go:84] Creating CNI manager for ""
	I0717 00:04:50.146531   21245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:04:50.146543   21245 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:50.146612   21245 start.go:340] cluster config:
	{Name:addons-957510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:50.148072   21245 out.go:177] * Starting "addons-957510" primary control-plane node in "addons-957510" cluster
	I0717 00:04:50.149424   21245 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:04:50.150683   21245 out.go:177] * Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:04:50.151998   21245 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:50.152027   21245 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:04:50.152042   21245 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:50.152054   21245 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:50.152177   21245 preload.go:172] Found /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:04:50.152189   21245 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:04:50.152542   21245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/config.json ...
	I0717 00:04:50.152565   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/config.json: {Name:mka71b6e573dc07c21b369acac427de301799e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:04:50.167697   21245 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:04:50.167829   21245 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:04:50.167850   21245 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory, skipping pull
	I0717 00:04:50.167859   21245 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c exists in cache, skipping pull
	I0717 00:04:50.167869   21245 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:04:50.167896   21245 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c from local cache
	I0717 00:05:03.377041   21245 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c from cached tarball
	I0717 00:05:03.377078   21245 cache.go:194] Successfully downloaded all kic artifacts
	I0717 00:05:03.377135   21245 start.go:360] acquireMachinesLock for addons-957510: {Name:mk80820d022b2d12c4a1887cc77d38b1c4a0f210 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:05:03.377238   21245 start.go:364] duration metric: took 84.656µs to acquireMachinesLock for "addons-957510"
	I0717 00:05:03.377259   21245 start.go:93] Provisioning new machine with config: &{Name:addons-957510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:05:03.377330   21245 start.go:125] createHost starting for "" (driver="docker")
	I0717 00:05:03.468555   21245 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 00:05:03.468784   21245 start.go:159] libmachine.API.Create for "addons-957510" (driver="docker")
	I0717 00:05:03.468819   21245 client.go:168] LocalClient.Create starting
	I0717 00:05:03.468952   21245 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem
	I0717 00:05:03.562442   21245 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/cert.pem
	I0717 00:05:03.730042   21245 cli_runner.go:164] Run: docker network inspect addons-957510 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 00:05:03.746934   21245 cli_runner.go:211] docker network inspect addons-957510 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 00:05:03.746999   21245 network_create.go:284] running [docker network inspect addons-957510] to gather additional debugging logs...
	I0717 00:05:03.747022   21245 cli_runner.go:164] Run: docker network inspect addons-957510
	W0717 00:05:03.762556   21245 cli_runner.go:211] docker network inspect addons-957510 returned with exit code 1
	I0717 00:05:03.762586   21245 network_create.go:287] error running [docker network inspect addons-957510]: docker network inspect addons-957510: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-957510 not found
	I0717 00:05:03.762601   21245 network_create.go:289] output of [docker network inspect addons-957510]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-957510 not found
	
	** /stderr **
	I0717 00:05:03.762693   21245 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:05:03.778988   21245 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b09bd0}
	I0717 00:05:03.779054   21245 network_create.go:124] attempt to create docker network addons-957510 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 00:05:03.779120   21245 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-957510 addons-957510
	I0717 00:05:04.109420   21245 network_create.go:108] docker network addons-957510 192.168.49.0/24 created
	I0717 00:05:04.109453   21245 kic.go:121] calculated static IP "192.168.49.2" for the "addons-957510" container
	I0717 00:05:04.109511   21245 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 00:05:04.124860   21245 cli_runner.go:164] Run: docker volume create addons-957510 --label name.minikube.sigs.k8s.io=addons-957510 --label created_by.minikube.sigs.k8s.io=true
	I0717 00:05:04.223797   21245 oci.go:103] Successfully created a docker volume addons-957510
	I0717 00:05:04.223896   21245 cli_runner.go:164] Run: docker run --rm --name addons-957510-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-957510 --entrypoint /usr/bin/test -v addons-957510:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -d /var/lib
	I0717 00:05:07.253702   21245 cli_runner.go:217] Completed: docker run --rm --name addons-957510-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-957510 --entrypoint /usr/bin/test -v addons-957510:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -d /var/lib: (3.029750681s)
	I0717 00:05:07.253729   21245 oci.go:107] Successfully prepared a docker volume addons-957510
	I0717 00:05:07.253749   21245 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:07.253775   21245 kic.go:194] Starting extracting preloaded images to volume ...
	I0717 00:05:07.253829   21245 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-957510:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 00:05:11.990553   21245 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-957510:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -I lz4 -xf /preloaded.tar -C /extractDir: (4.736690066s)
	I0717 00:05:11.990580   21245 kic.go:203] duration metric: took 4.736802613s to extract preloaded images to volume ...
	W0717 00:05:11.990708   21245 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 00:05:11.990835   21245 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 00:05:12.039324   21245 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-957510 --name addons-957510 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-957510 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-957510 --network addons-957510 --ip 192.168.49.2 --volume addons-957510:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c
	I0717 00:05:12.355099   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Running}}
	I0717 00:05:12.372938   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:12.392398   21245 cli_runner.go:164] Run: docker exec addons-957510 stat /var/lib/dpkg/alternatives/iptables
	I0717 00:05:12.435664   21245 oci.go:144] the created container "addons-957510" has a running status.
	I0717 00:05:12.435708   21245 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa...
	I0717 00:05:12.777218   21245 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 00:05:12.796294   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:12.814623   21245 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 00:05:12.814641   21245 kic_runner.go:114] Args: [docker exec --privileged addons-957510 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 00:05:12.862621   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:12.882016   21245 machine.go:94] provisionDockerMachine start ...
	I0717 00:05:12.882141   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:12.901768   21245 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:12.902045   21245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:05:12.902068   21245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:05:13.039331   21245 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-957510
	
	I0717 00:05:13.039358   21245 ubuntu.go:169] provisioning hostname "addons-957510"
	I0717 00:05:13.039427   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.057879   21245 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:13.058051   21245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:05:13.058067   21245 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-957510 && echo "addons-957510" | sudo tee /etc/hostname
	I0717 00:05:13.186294   21245 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-957510
	
	I0717 00:05:13.186364   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.202822   21245 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:13.203038   21245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:05:13.203055   21245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-957510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-957510/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-957510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:05:13.320085   21245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:05:13.320113   21245 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12715/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12715/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12715/.minikube}
	I0717 00:05:13.320135   21245 ubuntu.go:177] setting up certificates
	I0717 00:05:13.320144   21245 provision.go:84] configureAuth start
	I0717 00:05:13.320189   21245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-957510
	I0717 00:05:13.336760   21245 provision.go:143] copyHostCerts
	I0717 00:05:13.336824   21245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12715/.minikube/ca.pem (1082 bytes)
	I0717 00:05:13.336933   21245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12715/.minikube/cert.pem (1123 bytes)
	I0717 00:05:13.336986   21245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12715/.minikube/key.pem (1679 bytes)
	I0717 00:05:13.337033   21245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12715/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca-key.pem org=jenkins.addons-957510 san=[127.0.0.1 192.168.49.2 addons-957510 localhost minikube]
	I0717 00:05:13.397464   21245 provision.go:177] copyRemoteCerts
	I0717 00:05:13.397516   21245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:05:13.397561   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.414687   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:13.504302   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:05:13.525684   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:05:13.547355   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:05:13.568924   21245 provision.go:87] duration metric: took 248.768454ms to configureAuth
	I0717 00:05:13.568948   21245 ubuntu.go:193] setting minikube options for container-runtime
	I0717 00:05:13.569130   21245 config.go:182] Loaded profile config "addons-957510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:13.569236   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.585863   21245 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:13.586033   21245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:05:13.586050   21245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:05:13.791805   21245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:05:13.791838   21245 machine.go:97] duration metric: took 909.784997ms to provisionDockerMachine
	I0717 00:05:13.791851   21245 client.go:171] duration metric: took 10.323025732s to LocalClient.Create
	I0717 00:05:13.791907   21245 start.go:167] duration metric: took 10.32309443s to libmachine.API.Create "addons-957510"
	I0717 00:05:13.791919   21245 start.go:293] postStartSetup for "addons-957510" (driver="docker")
	I0717 00:05:13.791937   21245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:05:13.792020   21245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:05:13.792065   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.809908   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:13.896338   21245 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:05:13.899282   21245 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 00:05:13.899314   21245 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 00:05:13.899335   21245 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 00:05:13.899345   21245 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 00:05:13.899357   21245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12715/.minikube/addons for local assets ...
	I0717 00:05:13.899407   21245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12715/.minikube/files for local assets ...
	I0717 00:05:13.899430   21245 start.go:296] duration metric: took 107.503118ms for postStartSetup
	I0717 00:05:13.899706   21245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-957510
	I0717 00:05:13.916357   21245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/config.json ...
	I0717 00:05:13.916591   21245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:05:13.916635   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.934259   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:14.016539   21245 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 00:05:14.020627   21245 start.go:128] duration metric: took 10.643282827s to createHost
	I0717 00:05:14.020650   21245 start.go:83] releasing machines lock for "addons-957510", held for 10.643401438s
	I0717 00:05:14.020726   21245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-957510
	I0717 00:05:14.037196   21245 ssh_runner.go:195] Run: cat /version.json
	I0717 00:05:14.037234   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:14.037280   21245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:05:14.037348   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:14.053571   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:14.054408   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:14.210243   21245 ssh_runner.go:195] Run: systemctl --version
	I0717 00:05:14.214433   21245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:05:14.349623   21245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 00:05:14.353727   21245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:05:14.371504   21245 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 00:05:14.371585   21245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:05:14.397629   21245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 00:05:14.397649   21245 start.go:495] detecting cgroup driver to use...
	I0717 00:05:14.397675   21245 detect.go:187] detected "cgroupfs" cgroup driver on host os
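
A way to reproduce this detection by hand (a sketch; minikube's detect.go may use additional signals beyond the mount type):

	# "cgroup2fs" indicates the unified cgroup v2 hierarchy; "tmpfs" indicates
	# the legacy v1 layout, which corresponds to the "cgroupfs" driver detected here.
	stat -fc %T /sys/fs/cgroup
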
	I0717 00:05:14.397726   21245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:05:14.410562   21245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:05:14.420099   21245 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:05:14.420153   21245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:05:14.431492   21245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:05:14.444529   21245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:05:14.517624   21245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:05:14.593371   21245 docker.go:233] disabling docker service ...
	I0717 00:05:14.593440   21245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:05:14.610877   21245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:05:14.621166   21245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:05:14.691936   21245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:05:14.769151   21245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:05:14.779432   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:05:14.793650   21245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:05:14.793706   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.802382   21245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:05:14.802446   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.811114   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.819989   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.829025   21245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:05:14.837521   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.846991   21245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.861793   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.870980   21245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:05:14.879352   21245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:05:14.887526   21245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:14.965017   21245 ssh_runner.go:195] Run: sudo systemctl restart crio
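
Taken together, the sed edits above leave the CRI-O drop-in in roughly the following state (a sketch assuming the stock section layout of 02-crio.conf; the file name 99-minikube-example.conf is illustrative, not what minikube writes):

	# Illustrative end state of the CRI-O configuration after the edits logged above.
	sudo tee /etc/crio/crio.conf.d/99-minikube-example.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio
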
	I0717 00:05:15.061104   21245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:05:15.061167   21245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:05:15.064389   21245 start.go:563] Will wait 60s for crictl version
	I0717 00:05:15.064443   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:05:15.067475   21245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:05:15.101126   21245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 00:05:15.101237   21245 ssh_runner.go:195] Run: crio --version
	I0717 00:05:15.134531   21245 ssh_runner.go:195] Run: crio --version
	I0717 00:05:15.169419   21245 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0717 00:05:15.170913   21245 cli_runner.go:164] Run: docker network inspect addons-957510 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:05:15.187540   21245 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 00:05:15.191202   21245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
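
Note the pattern in that command: the filtered copy is written to a temp file and then cp'd over /etc/hosts rather than mv'd. Inside a container /etc/hosts is a bind-mounted file, so renaming over it fails, while copying truncates and rewrites it in place. The same pattern standalone, with the values from the log:

	# Rewrite a bind-mounted file such as /etc/hosts in place.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.49.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp preserves the bind mount; mv would fail
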
	I0717 00:05:15.201539   21245 kubeadm.go:883] updating cluster {Name:addons-957510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:05:15.201659   21245 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:15.201707   21245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:05:15.262252   21245 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:05:15.262273   21245 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:05:15.262313   21245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:05:15.293794   21245 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:05:15.293813   21245 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:05:15.293820   21245 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0717 00:05:15.293900   21245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-957510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:05:15.293967   21245 ssh_runner.go:195] Run: crio config
	I0717 00:05:15.334804   21245 cni.go:84] Creating CNI manager for ""
	I0717 00:05:15.334825   21245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:15.334839   21245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:05:15.334860   21245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-957510 NodeName:addons-957510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:05:15.334992   21245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-957510"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
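
Before this rendered config is fed to kubeadm (it is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to /var/tmp/minikube/kubeadm.yaml), it can be exercised without touching the node; a hedged sketch using kubeadm's built-in dry-run mode:

	# Parse the config and print what kubeadm would do, without changing anything.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
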
	
	I0717 00:05:15.335052   21245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:05:15.344067   21245 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:05:15.344130   21245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:05:15.352485   21245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0717 00:05:15.368837   21245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:05:15.385751   21245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0717 00:05:15.402881   21245 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 00:05:15.406261   21245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:05:15.416360   21245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:15.493106   21245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:05:15.505127   21245 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510 for IP: 192.168.49.2
	I0717 00:05:15.505151   21245 certs.go:194] generating shared ca certs ...
	I0717 00:05:15.505166   21245 certs.go:226] acquiring lock for ca certs: {Name:mk4aaa9cd83a5144bc0eaf83922d126bac8dea0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.505284   21245 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12715/.minikube/ca.key
	I0717 00:05:15.554459   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt ...
	I0717 00:05:15.554485   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt: {Name:mkafd762b74e91501469150fd7dec47494e5a802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.554636   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/ca.key ...
	I0717 00:05:15.554647   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/ca.key: {Name:mkf91c539d4b21ac62d660b304b2c0b65b6fafbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.554721   21245 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.key
	I0717 00:05:15.651335   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.crt ...
	I0717 00:05:15.651368   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.crt: {Name:mk849693914080d208b7a0bb1b7eedd342e5c5d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.651551   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.key ...
	I0717 00:05:15.651565   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.key: {Name:mk30e94cb3d1c7c38b4e620d8835d82d0a2962e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.651655   21245 certs.go:256] generating profile certs ...
	I0717 00:05:15.651721   21245 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.key
	I0717 00:05:15.651743   21245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt with IP's: []
	I0717 00:05:15.717124   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt ...
	I0717 00:05:15.717166   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: {Name:mka919a48dee2862c11a053e2f7c8d1c5d4e9aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.717362   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.key ...
	I0717 00:05:15.717377   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.key: {Name:mk8b1a754a7516e074e7acb2d70958123f670a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.717474   21245 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key.9d9c2dab
	I0717 00:05:15.717497   21245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt.9d9c2dab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0717 00:05:15.836544   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt.9d9c2dab ...
	I0717 00:05:15.836577   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt.9d9c2dab: {Name:mkb4edbae1a4b51e3798c09f2e57c052f997d26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.836756   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key.9d9c2dab ...
	I0717 00:05:15.836770   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key.9d9c2dab: {Name:mka61dc78c62d1b78f6cfaf6e64458f43e24daf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.836843   21245 certs.go:381] copying /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt.9d9c2dab -> /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt
	I0717 00:05:15.836923   21245 certs.go:385] copying /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key.9d9c2dab -> /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key
	I0717 00:05:15.836966   21245 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.key
	I0717 00:05:15.836982   21245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.crt with IP's: []
	I0717 00:05:15.910467   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.crt ...
	I0717 00:05:15.910493   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.crt: {Name:mkc23b3355ac79021789571e1065eafc3b48c365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.910639   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.key ...
	I0717 00:05:15.910649   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.key: {Name:mk782db588d186f30c2fff8f1973a8e6902f62f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.910803   21245 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:05:15.910833   21245 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:05:15.910857   21245 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:05:15.910878   21245 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/key.pem (1679 bytes)
	I0717 00:05:15.911398   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:05:15.932393   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:05:15.952472   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:05:15.972940   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:05:15.993113   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:05:16.013591   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:05:16.034284   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:05:16.054438   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:05:16.074376   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:05:16.094556   21245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
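
The server cert generated above was requested with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]; a quick check that the copied apiserver.crt actually carries them, using plain openssl and the destination path from the scp above:

	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'
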
	I0717 00:05:16.109302   21245 ssh_runner.go:195] Run: openssl version
	I0717 00:05:16.114114   21245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:05:16.122967   21245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:16.126103   21245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:16.126148   21245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:16.132287   21245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
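
The b5213941.0 link name tested above is not arbitrary: OpenSSL looks up trusted certificates by subject-name hash, so the symlink in /etc/ssl/certs must be named <hash>.0. Recomputing it from the CA shows where the value comes from:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${h}.0"   # prints b5213941.0 for the minikubeCA subject
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
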
	I0717 00:05:16.140054   21245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:05:16.142796   21245 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:05:16.142846   21245 kubeadm.go:392] StartCluster: {Name:addons-957510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:16.142917   21245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:05:16.142952   21245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:05:16.174043   21245 cri.go:89] found id: ""
	I0717 00:05:16.174105   21245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:05:16.182223   21245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:05:16.189918   21245 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0717 00:05:16.189992   21245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:05:16.199229   21245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:05:16.199246   21245 kubeadm.go:157] found existing configuration files:
	
	I0717 00:05:16.199293   21245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:05:16.206780   21245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:05:16.206830   21245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:05:16.214660   21245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:05:16.222332   21245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:05:16.222382   21245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:05:16.230312   21245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:05:16.237960   21245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:05:16.238020   21245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:05:16.245872   21245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:05:16.253325   21245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:05:16.253367   21245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:05:16.260549   21245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 00:05:16.332726   21245 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1062-gcp\n", err: exit status 1
	I0717 00:05:16.384020   21245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:05:25.834697   21245 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:05:25.834776   21245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:05:25.834908   21245 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0717 00:05:25.834989   21245 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1062-gcp
	I0717 00:05:25.835056   21245 kubeadm.go:310] OS: Linux
	I0717 00:05:25.835120   21245 kubeadm.go:310] CGROUPS_CPU: enabled
	I0717 00:05:25.835171   21245 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0717 00:05:25.835255   21245 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0717 00:05:25.835337   21245 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0717 00:05:25.835407   21245 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0717 00:05:25.835473   21245 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0717 00:05:25.835550   21245 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0717 00:05:25.835631   21245 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0717 00:05:25.835710   21245 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0717 00:05:25.835815   21245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:05:25.835975   21245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:05:25.836059   21245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 00:05:25.836111   21245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:05:25.838000   21245 out.go:204]   - Generating certificates and keys ...
	I0717 00:05:25.838084   21245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:05:25.838143   21245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:05:25.838211   21245 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:05:25.838278   21245 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:05:25.838360   21245 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:05:25.838436   21245 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:05:25.838511   21245 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:05:25.838668   21245 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-957510 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:05:25.838742   21245 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:05:25.838896   21245 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-957510 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:05:25.838987   21245 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:05:25.839056   21245 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:05:25.839098   21245 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:05:25.839157   21245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:05:25.839205   21245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:05:25.839253   21245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:05:25.839304   21245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:05:25.839357   21245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:05:25.839412   21245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:05:25.839479   21245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:05:25.839533   21245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:05:25.841019   21245 out.go:204]   - Booting up control plane ...
	I0717 00:05:25.841128   21245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:05:25.841199   21245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:05:25.841255   21245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:05:25.841344   21245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:05:25.841439   21245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:05:25.841489   21245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:05:25.841617   21245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:05:25.841679   21245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:05:25.841731   21245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.112151ms
	I0717 00:05:25.841793   21245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:05:25.841843   21245 kubeadm.go:310] [api-check] The API server is healthy after 4.502007969s
	I0717 00:05:25.841940   21245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:05:25.842053   21245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:05:25.842113   21245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:05:25.842273   21245 kubeadm.go:310] [mark-control-plane] Marking the node addons-957510 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:05:25.842336   21245 kubeadm.go:310] [bootstrap-token] Using token: pl3pji.fe9z3wlbs9jxiyvg
	I0717 00:05:25.843974   21245 out.go:204]   - Configuring RBAC rules ...
	I0717 00:05:25.844090   21245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:05:25.844195   21245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:05:25.844327   21245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:05:25.844532   21245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:05:25.844646   21245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:05:25.844729   21245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:05:25.844836   21245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:05:25.844889   21245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:05:25.844936   21245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:05:25.844943   21245 kubeadm.go:310] 
	I0717 00:05:25.844999   21245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:05:25.845008   21245 kubeadm.go:310] 
	I0717 00:05:25.845085   21245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:05:25.845093   21245 kubeadm.go:310] 
	I0717 00:05:25.845114   21245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:05:25.845163   21245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:05:25.845211   21245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:05:25.845217   21245 kubeadm.go:310] 
	I0717 00:05:25.845266   21245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:05:25.845272   21245 kubeadm.go:310] 
	I0717 00:05:25.845311   21245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:05:25.845316   21245 kubeadm.go:310] 
	I0717 00:05:25.845365   21245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:05:25.845442   21245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:05:25.845499   21245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:05:25.845505   21245 kubeadm.go:310] 
	I0717 00:05:25.845580   21245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:05:25.845671   21245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:05:25.845685   21245 kubeadm.go:310] 
	I0717 00:05:25.845793   21245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pl3pji.fe9z3wlbs9jxiyvg \
	I0717 00:05:25.845887   21245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:daf389ec49e00d61976d9dc190f73df8121e276c738a86d1ec306a03abd6f344 \
	I0717 00:05:25.845912   21245 kubeadm.go:310] 	--control-plane 
	I0717 00:05:25.845919   21245 kubeadm.go:310] 
	I0717 00:05:25.845991   21245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:05:25.845998   21245 kubeadm.go:310] 
	I0717 00:05:25.846069   21245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pl3pji.fe9z3wlbs9jxiyvg \
	I0717 00:05:25.846184   21245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:daf389ec49e00d61976d9dc190f73df8121e276c738a86d1ec306a03abd6f344 
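
If the token or hash from the join commands above is ever lost, both can be regenerated on the control plane. This is the standard kubeadm procedure; note that on this cluster certificatesDir is /var/lib/minikube/certs (see the ClusterConfiguration above), so substitute that path for the kubeadm default shown here:

	# Recreate a complete join command:
	kubeadm token create --print-join-command
	# Or recompute just the discovery-token-ca-cert-hash from the cluster CA:
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
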
	I0717 00:05:25.846200   21245 cni.go:84] Creating CNI manager for ""
	I0717 00:05:25.846211   21245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:25.848001   21245 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 00:05:25.849232   21245 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 00:05:25.852856   21245 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 00:05:25.852879   21245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 00:05:25.869210   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 00:05:26.065353   21245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:05:26.065449   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:26.065454   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-957510 minikube.k8s.io/updated_at=2024_07_17T00_05_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=addons-957510 minikube.k8s.io/primary=true
	I0717 00:05:26.072284   21245 ops.go:34] apiserver oom_adj: -16
	I0717 00:05:26.224718   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:26.724773   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:27.225516   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:27.724909   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:28.225121   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:28.725077   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:29.225763   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:29.725169   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:30.225573   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:30.725487   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:31.224864   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:31.725124   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:32.225774   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:32.725129   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:33.225121   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:33.725690   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:34.225180   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:34.725112   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:35.224843   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:35.725088   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:36.225749   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:36.725093   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:37.224825   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:37.724807   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:38.225750   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:38.724748   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:39.224931   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:39.724783   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:40.225523   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:40.289315   21245 kubeadm.go:1113] duration metric: took 14.223923759s to wait for elevateKubeSystemPrivileges
	I0717 00:05:40.289352   21245 kubeadm.go:394] duration metric: took 24.14650761s to StartCluster
	I0717 00:05:40.289372   21245 settings.go:142] acquiring lock: {Name:mk9a09422d46b143eae10f5996fa2de67145de97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:40.289483   21245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:05:40.289964   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/kubeconfig: {Name:mkf7e1e083f0112534ba419cb3d886353389254d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:40.290197   21245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:05:40.290238   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:05:40.290319   21245 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
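The toEnable map above is the addon set requested for this profile. As a hedged illustration only (the flag usage is assumed from the minikube CLI, not taken from this run), the same set could be requested at start time with repeated --addons flags, one per entry marked true in the map:

	minikube start -p addons-957510 --addons=ingress --addons=metrics-server --addons=registry
	# ...and so on for each addon set to true in the toEnable map above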
	I0717 00:05:40.290390   21245 config.go:182] Loaded profile config "addons-957510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:40.290415   21245 addons.go:69] Setting ingress-dns=true in profile "addons-957510"
	I0717 00:05:40.290428   21245 addons.go:69] Setting helm-tiller=true in profile "addons-957510"
	I0717 00:05:40.290430   21245 addons.go:69] Setting metrics-server=true in profile "addons-957510"
	I0717 00:05:40.290439   21245 addons.go:69] Setting gcp-auth=true in profile "addons-957510"
	I0717 00:05:40.290452   21245 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-957510"
	I0717 00:05:40.290461   21245 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-957510"
	I0717 00:05:40.290462   21245 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-957510"
	I0717 00:05:40.290467   21245 addons.go:69] Setting registry=true in profile "addons-957510"
	I0717 00:05:40.290470   21245 mustload.go:65] Loading cluster: addons-957510
	I0717 00:05:40.290480   21245 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-957510"
	I0717 00:05:40.290473   21245 addons.go:69] Setting volcano=true in profile "addons-957510"
	I0717 00:05:40.290488   21245 addons.go:69] Setting volumesnapshots=true in profile "addons-957510"
	I0717 00:05:40.290506   21245 addons.go:234] Setting addon volcano=true in "addons-957510"
	I0717 00:05:40.290461   21245 addons.go:69] Setting storage-provisioner=true in profile "addons-957510"
	I0717 00:05:40.290515   21245 addons.go:234] Setting addon volumesnapshots=true in "addons-957510"
	I0717 00:05:40.290526   21245 addons.go:234] Setting addon storage-provisioner=true in "addons-957510"
	I0717 00:05:40.290539   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290539   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290552   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290454   21245 addons.go:234] Setting addon helm-tiller=true in "addons-957510"
	I0717 00:05:40.290592   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290676   21245 config.go:182] Loaded profile config "addons-957510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:40.290813   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290983   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.291012   21245 addons.go:69] Setting ingress=true in profile "addons-957510"
	I0717 00:05:40.291058   21245 addons.go:234] Setting addon ingress=true in "addons-957510"
	I0717 00:05:40.291097   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290433   21245 addons.go:69] Setting default-storageclass=true in profile "addons-957510"
	I0717 00:05:40.291133   21245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-957510"
	I0717 00:05:40.290506   21245 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-957510"
	I0717 00:05:40.291181   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290483   21245 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-957510"
	I0717 00:05:40.291235   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.291398   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.291522   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.291601   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290456   21245 addons.go:234] Setting addon metrics-server=true in "addons-957510"
	I0717 00:05:40.291647   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.291665   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290455   21245 addons.go:234] Setting addon ingress-dns=true in "addons-957510"
	I0717 00:05:40.292058   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.292111   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290413   21245 addons.go:69] Setting yakd=true in profile "addons-957510"
	I0717 00:05:40.292591   21245 addons.go:234] Setting addon yakd=true in "addons-957510"
	I0717 00:05:40.292619   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290983   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290484   21245 addons.go:234] Setting addon registry=true in "addons-957510"
	I0717 00:05:40.293178   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.293859   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.292628   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.294920   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290985   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290993   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.296935   21245 out.go:177] * Verifying Kubernetes components...
	I0717 00:05:40.290448   21245 addons.go:69] Setting cloud-spanner=true in profile "addons-957510"
	I0717 00:05:40.297142   21245 addons.go:234] Setting addon cloud-spanner=true in "addons-957510"
	I0717 00:05:40.297186   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.297688   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290422   21245 addons.go:69] Setting inspektor-gadget=true in profile "addons-957510"
	I0717 00:05:40.298417   21245 addons.go:234] Setting addon inspektor-gadget=true in "addons-957510"
	I0717 00:05:40.298479   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.299023   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290991   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.308189   21245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:40.332163   21245 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-957510"
	I0717 00:05:40.332209   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.332658   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.337398   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:05:40.338952   21245 addons.go:234] Setting addon default-storageclass=true in "addons-957510"
	I0717 00:05:40.338996   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.339434   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.345522   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:05:40.347275   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:05:40.347306   21245 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:05:40.347371   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.347562   21245 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:05:40.345531   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:05:40.349426   21245 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:05:40.349445   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:05:40.349503   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.352017   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:05:40.353580   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:05:40.354963   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 00:05:40.356351   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:05:40.357606   21245 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:05:40.357693   21245 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:05:40.357763   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 00:05:40.359435   21245 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:05:40.359466   21245 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:05:40.359487   21245 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:05:40.359507   21245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:05:40.359539   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.359566   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.361600   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:05:40.363142   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:05:40.363166   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:05:40.363227   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.363389   21245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:05:40.365573   21245 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:05:40.367040   21245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:40.367305   21245 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:05:40.367336   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:05:40.367406   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.377906   21245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:40.382199   21245 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:05:40.382234   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:05:40.382296   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.391074   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.402013   21245 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:05:40.402053   21245 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:05:40.403588   21245 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:05:40.403610   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:05:40.403671   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.403903   21245 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:05:40.404077   21245 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:05:40.404089   21245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:05:40.404136   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.405905   21245 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:05:40.405922   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:05:40.405965   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.407111   21245 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 00:05:40.408687   21245 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 00:05:40.408707   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 00:05:40.408771   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	W0717 00:05:40.411030   21245 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 00:05:40.418281   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.426810   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.446440   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.450468   21245 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:05:40.450521   21245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:05:40.452726   21245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:05:40.452749   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:05:40.452808   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.452726   21245 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:05:40.454267   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.455222   21245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:05:40.455239   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:05:40.455298   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.455524   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.455543   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.469550   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.471817   21245 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:05:40.473451   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:05:40.473475   21245 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:05:40.473542   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.479011   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.481412   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.486103   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.486187   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.490302   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.496652   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.496754   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	W0717 00:05:40.528216   21245 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0717 00:05:40.528253   21245 retry.go:31] will retry after 360.056519ms: ssh: handshake failed: EOF
	W0717 00:05:40.528291   21245 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0717 00:05:40.528310   21245 retry.go:31] will retry after 261.845108ms: ssh: handshake failed: EOF
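The two handshake failures above are absorbed by minikube's retry helper (retry.go:31), which waits a short randomized delay and redials. A rough shell analogue, purely illustrative (SSH_KEY stands in for the id_rsa path logged above; minikube does this in Go, not in a fixed-schedule loop):

	# redial the SSH endpoint a few times with a growing delay
	for delay in 0.3 0.6 1.2; do
	  ssh -i "$SSH_KEY" -p 32768 docker@127.0.0.1 true && break
	  sleep "$delay"
	done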
	I0717 00:05:40.624237   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:05:40.735370   21245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:05:40.821077   21245 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:05:40.821161   21245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:05:40.839961   21245 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:05:40.840009   21245 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:05:40.922005   21245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:05:40.922096   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:05:40.940726   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:05:40.941452   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:05:41.023206   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:05:41.023635   21245 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 00:05:41.023661   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:05:41.023660   21245 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 00:05:41.023696   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:05:41.025458   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:05:41.035913   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:05:41.035943   21245 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:05:41.043287   21245 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:05:41.043375   21245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:05:41.044905   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:05:41.126630   21245 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:05:41.126712   21245 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:05:41.129838   21245 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:05:41.129860   21245 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:05:41.322275   21245 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:05:41.322314   21245 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:05:41.324830   21245 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:05:41.324855   21245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:05:41.325742   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:05:41.325767   21245 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:05:41.332315   21245 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:05:41.332350   21245 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 00:05:41.420488   21245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:05:41.420521   21245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:05:41.421123   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:05:41.421188   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:05:41.422239   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:05:41.439076   21245 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:05:41.439169   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:05:41.530779   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:05:41.530810   21245 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:05:41.621367   21245 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:05:41.621396   21245 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:05:41.621634   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:05:41.621649   21245 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:05:41.631199   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:05:41.643551   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:05:41.721350   21245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:05:41.721380   21245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:05:41.723105   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:05:41.723178   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:05:41.833302   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:05:41.833382   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:05:41.835963   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:05:41.926884   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:05:41.930449   21245 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:05:41.930477   21245 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:05:41.939152   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:05:41.939181   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:05:42.222224   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:05:42.222270   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:05:42.222489   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:05:42.234279   21245 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:42.234307   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:05:42.528377   21245 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:05:42.528411   21245 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:05:42.631382   21245 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.007099195s)
	I0717 00:05:42.631418   21245 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
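The sed pipeline completed above rewrites the coredns ConfigMap in place, and the stanza it injects can be read directly off the sed expression. A verification step along these lines (not part of the test run) would show the patched Corefile:

	# dump the live Corefile; the "hosts" stanza in the comments below is what
	# the sed expression inserts ahead of the "forward . /etc/resolv.conf" line
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }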
	I0717 00:05:42.632650   21245 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.897249827s)
	I0717 00:05:42.633565   21245 node_ready.go:35] waiting up to 6m0s for node "addons-957510" to be "Ready" ...
	I0717 00:05:42.734503   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:42.921477   21245 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:05:42.921573   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:05:42.921942   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:05:42.921987   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:05:43.321374   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:05:43.333438   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:05:43.333468   21245 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:05:43.336006   21245 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-957510" context rescaled to 1 replicas
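The kapi rescale above pins coredns to a single replica for this single-node profile; a one-line manual equivalent (assuming the same kubeconfig context) would be:

	kubectl --context addons-957510 -n kube-system scale deployment coredns --replicas=1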
	I0717 00:05:43.839919   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:05:43.840033   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:05:44.034734   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:05:44.034825   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:05:44.322435   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:05:44.322513   21245 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:05:44.426203   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.485372961s)
	I0717 00:05:44.426368   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.484870972s)
	I0717 00:05:44.426436   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.403199819s)
	I0717 00:05:44.426479   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.400981247s)
	I0717 00:05:44.521366   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:05:44.723432   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:46.821237   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.776281242s)
	I0717 00:05:46.821403   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.177821841s)
	I0717 00:05:46.821410   21245 addons.go:475] Verifying addon ingress=true in "addons-957510"
	I0717 00:05:46.821471   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.98547469s)
	I0717 00:05:46.821375   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.190125888s)
	I0717 00:05:46.821564   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.894647637s)
	I0717 00:05:46.821582   21245 addons.go:475] Verifying addon metrics-server=true in "addons-957510"
	I0717 00:05:46.821499   21245 addons.go:475] Verifying addon registry=true in "addons-957510"
	I0717 00:05:46.821651   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.59911204s)
	I0717 00:05:46.821276   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.398974705s)
	I0717 00:05:46.823383   21245 out.go:177] * Verifying registry addon...
	I0717 00:05:46.823385   21245 out.go:177] * Verifying ingress addon...
	I0717 00:05:46.824385   21245 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-957510 service yakd-dashboard -n yakd-dashboard
	
	I0717 00:05:46.826030   21245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:05:46.826942   21245 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:05:46.832535   21245 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:05:46.832563   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:46.832844   21245 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:05:46.832866   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:47.137167   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:47.333560   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:47.334170   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:47.546384   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.81183177s)
	W0717 00:05:47.546431   21245 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:05:47.546445   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.224966953s)
	I0717 00:05:47.546453   21245 retry.go:31] will retry after 277.065396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
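Both failures above are the usual CRD-establishment race: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same batch as the CRDs that define its kind, before the API server has marked those CRDs Established. minikube resolves it by retrying (and, just below, by re-running the apply with --force), which also clears the race. A minimal sketch of the explicit ordering fix, assuming the same addon manifest paths, would be:

	# 1) apply only the VolumeSnapshot CRDs
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# 2) wait until the API server reports them Established
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	# 3) only then apply the resources that reference the new kinds
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml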
	I0717 00:05:47.625437   21245 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:05:47.625515   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:47.651370   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:47.823717   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:47.831339   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:47.832445   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:47.842012   21245 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:05:47.931226   21245 addons.go:234] Setting addon gcp-auth=true in "addons-957510"
	I0717 00:05:47.931288   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:47.931805   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:47.960041   21245 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:05:47.960084   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:47.977681   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:48.330394   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:48.334118   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:48.439918   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.918410304s)
	I0717 00:05:48.439964   21245 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-957510"
	I0717 00:05:48.442652   21245 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:05:48.444562   21245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:05:48.451481   21245 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:05:48.451505   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:48.830207   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:48.830238   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:48.948603   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:49.329954   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:49.330327   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:49.448271   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:49.636987   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:49.830924   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:49.831554   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:49.950228   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:50.333816   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:50.334737   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:50.449485   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:50.830582   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:50.830627   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:50.948855   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:51.048818   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.225053499s)
	I0717 00:05:51.048996   21245 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.088924153s)
	I0717 00:05:51.051611   21245 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:05:51.053398   21245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:51.054868   21245 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:05:51.054889   21245 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:05:51.073647   21245 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:05:51.073672   21245 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:05:51.122467   21245 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:05:51.122492   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:05:51.142149   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:05:51.330163   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:51.331105   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:51.449925   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:51.638982   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:51.747966   21245 addons.go:475] Verifying addon gcp-auth=true in "addons-957510"
	I0717 00:05:51.749181   21245 out.go:177] * Verifying gcp-auth addon...
	I0717 00:05:51.751337   21245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:05:51.755324   21245 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:05:51.755345   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:51.830473   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:51.830658   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:51.949614   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:52.254558   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:52.330686   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:52.331939   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:52.449541   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:52.754786   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:52.830481   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:52.831091   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:52.948782   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:53.254939   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:53.329925   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:53.330211   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:53.448274   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:53.754431   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:53.829876   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:53.830323   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:53.948276   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:54.136733   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:54.254303   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:54.329541   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:54.329952   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:54.448962   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:54.755254   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:54.830269   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:54.830695   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:54.948844   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:55.254971   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:55.329899   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:55.329988   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:55.448840   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:55.755271   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:55.830223   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:55.830223   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:55.948408   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:56.136818   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:56.254330   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:56.329487   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:56.330375   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:56.448715   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:56.754908   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:56.830036   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:56.830171   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:56.948406   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:57.253969   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:57.329967   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:57.330149   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:57.450347   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:57.754049   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:57.829859   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:57.830001   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:57.948369   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:58.136994   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:58.254489   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:58.330032   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:58.330482   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:58.448203   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:58.754197   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:58.830521   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:58.830626   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:58.948971   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:59.138393   21245 node_ready.go:49] node "addons-957510" has status "Ready":"True"
	I0717 00:05:59.138422   21245 node_ready.go:38] duration metric: took 16.504830191s for node "addons-957510" to be "Ready" ...
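The node-readiness gate that just cleared can also be checked directly; a minimal sketch (our command, not minikube's own node_ready.go logic), with the node name taken from the log and the timeout assumed:

  # Wait for the single minikube node to report the Ready condition.
  kubectl --context addons-957510 wait node addons-957510 \
    --for=condition=Ready --timeout=120s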
	I0717 00:05:59.138435   21245 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:05:59.149562   21245 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5wj8z" in "kube-system" namespace to be "Ready" ...
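The "extra waiting" step iterates the six system-critical label selectors listed two lines up. A hand-rolled approximation (assumed equivalent behavior, not minikube's own code) that waits on each selector with the same 6m budget:

  # Wait for each system-critical component in kube-system, using the
  # selectors and the 6m0s budget quoted by pod_ready.go:35 above.
  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy \
             component=kube-scheduler; do
    kubectl --context addons-957510 -n kube-system wait pod \
      --selector "$sel" --for=condition=Ready --timeout=6m
  done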
	I0717 00:05:59.254345   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:59.331442   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.332157   21245 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:05:59.332179   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:59.450066   21245 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:05:59.450094   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:59.755180   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:59.832570   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.834009   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:59.950506   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:00.254372   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:00.330308   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:00.330405   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:00.449312   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:00.654274   21245 pod_ready.go:92] pod "coredns-7db6d8ff4d-5wj8z" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.654298   21245 pod_ready.go:81] duration metric: took 1.504708039s for pod "coredns-7db6d8ff4d-5wj8z" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.654326   21245 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.657975   21245 pod_ready.go:92] pod "etcd-addons-957510" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.657995   21245 pod_ready.go:81] duration metric: took 3.660239ms for pod "etcd-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.658009   21245 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.661675   21245 pod_ready.go:92] pod "kube-apiserver-addons-957510" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.661693   21245 pod_ready.go:81] duration metric: took 3.676497ms for pod "kube-apiserver-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.661703   21245 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.665215   21245 pod_ready.go:92] pod "kube-controller-manager-addons-957510" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.665233   21245 pod_ready.go:81] duration metric: took 3.522159ms for pod "kube-controller-manager-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.665243   21245 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bvcbh" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.736325   21245 pod_ready.go:92] pod "kube-proxy-bvcbh" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.736345   21245 pod_ready.go:81] duration metric: took 71.096153ms for pod "kube-proxy-bvcbh" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.736355   21245 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.754844   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:00.830768   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:00.831169   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:00.950039   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:01.137128   21245 pod_ready.go:92] pod "kube-scheduler-addons-957510" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:01.137149   21245 pod_ready.go:81] duration metric: took 400.788339ms for pod "kube-scheduler-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:01.137159   21245 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace to be "Ready" ...
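From this point on, every later pod_ready.go:102 check below still reports "Ready":"False" for this pod. Two diagnostic commands one could run against the pod named in the log; these are suggestions for post-mortem use, not part of the test harness:

  # Inspect events and container state for the stuck metrics-server pod.
  kubectl --context addons-957510 -n kube-system describe pod metrics-server-c59844bb4-6hgp6
  # Check its container logs for readiness-probe or startup failures.
  kubectl --context addons-957510 -n kube-system logs metrics-server-c59844bb4-6hgp6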
	I0717 00:06:01.255202   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:01.331365   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:01.331506   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:01.450481   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:01.754954   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:01.830500   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:01.830653   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:01.954936   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:02.326797   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:02.334465   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:02.335639   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:02.524700   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:02.754987   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:02.837098   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:02.838232   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.027494   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:03.143030   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:03.255526   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:03.331241   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.331741   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:03.450124   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:03.755017   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:03.830999   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.831292   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:03.950023   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:04.255624   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:04.330884   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:04.331113   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:04.450006   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:04.754902   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:04.830770   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:04.830892   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:04.950240   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:05.144002   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:05.255579   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:05.331425   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:05.331538   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:05.450547   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:05.755020   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:05.830877   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:05.833001   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:05.950197   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:06.255180   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:06.330733   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:06.331085   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:06.450290   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:06.755775   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:06.831623   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:06.832064   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:06.951191   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:07.254828   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:07.330809   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:07.330896   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:07.450409   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:07.642459   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:07.755173   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:07.832818   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:07.832970   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:07.950195   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:08.254777   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:08.331105   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:08.331331   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:08.451295   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:08.755668   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:08.830875   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:08.831415   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.024729   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:09.254913   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:09.331352   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:09.331545   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.450970   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:09.642953   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:09.755185   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:09.831255   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.831308   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:09.949609   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:10.254232   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:10.330894   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:10.331148   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:10.449579   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:10.755476   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:10.830745   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:10.832312   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:10.950316   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:11.255564   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:11.331609   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:11.331744   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:11.451629   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:11.645832   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:11.754465   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:11.830689   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:11.830947   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:11.949932   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:12.254823   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:12.331200   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:12.331632   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:12.449937   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:12.754760   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:12.830708   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:12.831008   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:12.950092   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:13.255080   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:13.330630   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:13.330882   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:13.450067   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:13.754989   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:13.830909   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:13.831073   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:13.950222   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:14.143250   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:14.256478   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.330371   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.330423   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:14.449388   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:14.754822   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.830685   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.831407   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:14.950934   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:15.255567   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:15.330984   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:15.331226   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:15.450756   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:15.754796   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:15.830733   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:15.831115   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:15.949289   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:16.254579   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:16.330852   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:16.330882   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:16.450030   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:16.642044   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:16.754779   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:16.830856   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:16.830909   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:16.950257   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:17.255116   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:17.330750   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:17.330928   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:17.451240   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:17.754986   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:17.830900   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:17.830981   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:17.951246   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:18.323618   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:18.335537   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:18.336563   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:18.527237   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:18.726747   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:18.823253   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:18.840076   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:18.841236   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.026725   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:19.255056   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:19.332099   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:19.333460   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.453834   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:19.754939   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:19.831362   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:19.832922   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.953115   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:20.255213   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:20.331181   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:20.331284   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:20.450284   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:20.755310   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:20.831468   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:20.832905   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:20.950812   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:21.144028   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:21.254929   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:21.331109   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:21.331360   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:21.450015   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:21.755391   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:21.831260   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:21.831385   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:21.950230   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:22.255364   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:22.331136   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:22.331288   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:22.450207   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:22.755098   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:22.831270   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:22.831567   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:22.951305   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:23.255303   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:23.330389   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:23.330966   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:23.450876   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:23.643273   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:23.755209   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:23.831959   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:23.833263   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:23.950238   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:24.255124   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:24.330988   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:24.331203   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:24.450277   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:24.755001   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:24.830809   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:24.831099   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:24.949616   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:25.255583   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:25.331497   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:25.331603   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:25.450098   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:25.644095   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:25.755446   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:25.831364   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:25.832893   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:25.950450   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:26.254643   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:26.330690   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:26.331434   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:26.452061   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:26.755258   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:26.830811   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:26.831087   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:26.950047   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:27.255116   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:27.330624   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:27.330665   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:27.450395   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:27.646433   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:27.755428   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:27.830953   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:27.831596   21245 kapi.go:107] duration metric: took 41.005565184s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 00:06:27.952827   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:28.254780   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:28.331385   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:28.449193   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:28.755091   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:28.831199   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:28.950348   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:29.254841   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:29.331596   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:29.449886   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:29.755541   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:29.831474   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:29.950891   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:30.143254   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:30.255974   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:30.332060   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:30.451187   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:30.754846   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:30.832198   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:30.949829   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:31.254883   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:31.331015   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:31.449893   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:31.754807   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:31.831754   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:31.949567   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:32.143907   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:32.254613   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:32.331329   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:32.449806   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:32.755216   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:32.831434   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:32.949273   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:33.254668   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:33.330809   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:33.449694   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:33.754996   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:33.831528   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:33.949803   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:34.255316   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:34.331863   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:34.450138   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:34.642782   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:34.754572   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:34.830952   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:34.949731   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:35.254907   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:35.331386   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:35.450081   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:35.754802   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:35.831046   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:35.950258   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:36.256504   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:36.331488   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:36.451183   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:36.644033   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:36.754874   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:36.831031   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:36.950748   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:37.255412   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:37.331291   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:37.453741   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:37.755159   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:37.831789   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:37.951728   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:38.255496   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:38.332068   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:38.450049   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:38.755065   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:38.831274   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:38.950415   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:39.143108   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:39.254782   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:39.331465   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:39.449097   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:39.754847   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:39.831293   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:39.950081   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:40.255444   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:40.331928   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:40.450048   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:40.755063   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:40.831661   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:40.949194   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:41.255587   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:41.331252   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:41.451187   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:41.644528   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:41.754954   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:41.832019   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:41.950596   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:42.254797   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:42.331863   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:42.449568   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:42.755550   21245 kapi.go:107] duration metric: took 51.004207672s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 00:06:42.758354   21245 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-957510 cluster.
	I0717 00:06:42.760116   21245 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:06:42.761511   21245 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
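The skip label mentioned above can be exercised like this; a minimal sketch in which the pod name, image, and label value "true" are placeholders (only the gcp-auth-skip-secret key comes from the message):

  # Create a pod that opts out of GCP credential mounting at admission
  # time by carrying the gcp-auth-skip-secret label key.
  kubectl --context addons-957510 run no-gcp-creds \
    --image=nginx \
    --labels=gcp-auth-skip-secret=true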
	I0717 00:06:42.831181   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:42.949777   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:43.330742   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:43.456434   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:43.831775   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:43.949989   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:44.142204   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:44.330626   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:44.448982   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:44.831203   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:44.950305   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:45.331273   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:45.451039   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:45.831310   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:45.949582   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.143335   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:46.330469   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.449195   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.831258   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.950217   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:47.331065   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:47.449905   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:47.830579   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:47.949310   21245 kapi.go:107] duration metric: took 59.504747866s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:06:48.331078   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:48.642815   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:48.831257   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:49.331100   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:49.830795   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:50.331397   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:50.831085   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:51.143018   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:51.331437   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:51.831437   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:52.330930   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:52.831153   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:53.143596   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:53.331003   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:53.831296   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:54.331449   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:54.831075   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:55.331430   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:55.642791   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:55.833095   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:56.331236   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:56.831085   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:57.331035   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:57.831476   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:58.142863   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:58.331417   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:58.830738   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:59.330749   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:59.830895   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:00.143338   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:00.330566   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:00.830711   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:01.330563   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:01.831410   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:02.143439   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:02.330982   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:02.830807   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:03.330808   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:03.831304   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:04.143736   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:04.332241   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:04.832348   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:05.331212   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:05.831715   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:06.143832   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:06.331036   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:06.831462   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:07.331049   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:07.830992   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.330678   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.642074   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:08.831397   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.331164   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.830721   21245 kapi.go:107] duration metric: took 1m23.003777151s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:07:09.832590   21245 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, helm-tiller, metrics-server, storage-provisioner, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0717 00:07:09.834005   21245 addons.go:510] duration metric: took 1m29.543687281s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin default-storageclass helm-tiller metrics-server storage-provisioner yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
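The enabled addon set summarized above can also be inspected after the run; a sketch, assuming the same binary and profile name used throughout this log:

out/minikube-linux-amd64 -p addons-957510 addons list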
	I0717 00:07:10.642455   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:12.643827   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:15.142476   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:17.142634   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:19.143006   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:21.642798   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:24.142365   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:26.142821   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:28.644691   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:31.142168   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:33.143631   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:35.643532   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:36.142452   21245 pod_ready.go:92] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:36.142475   21245 pod_ready.go:81] duration metric: took 1m35.005309819s for pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:36.142485   21245 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vxl6w" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:36.146675   21245 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-vxl6w" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:36.146695   21245 pod_ready.go:81] duration metric: took 4.20394ms for pod "nvidia-device-plugin-daemonset-vxl6w" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:36.146716   21245 pod_ready.go:38] duration metric: took 1m37.008269238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:07:36.146731   21245 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:07:36.146758   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:07:36.146804   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:07:36.180581   21245 cri.go:89] found id: "81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:36.180605   21245 cri.go:89] found id: ""
	I0717 00:07:36.180614   21245 logs.go:276] 1 containers: [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef]
	I0717 00:07:36.180670   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.183910   21245 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:07:36.183977   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:07:36.216421   21245 cri.go:89] found id: "fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:36.216447   21245 cri.go:89] found id: ""
	I0717 00:07:36.216457   21245 logs.go:276] 1 containers: [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba]
	I0717 00:07:36.216505   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.219574   21245 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:07:36.219624   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:07:36.251233   21245 cri.go:89] found id: "c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:36.251257   21245 cri.go:89] found id: ""
	I0717 00:07:36.251266   21245 logs.go:276] 1 containers: [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a]
	I0717 00:07:36.251307   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.254411   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:07:36.254459   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:07:36.286533   21245 cri.go:89] found id: "2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:36.286560   21245 cri.go:89] found id: ""
	I0717 00:07:36.286570   21245 logs.go:276] 1 containers: [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd]
	I0717 00:07:36.286621   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.289736   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:07:36.289798   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:07:36.323138   21245 cri.go:89] found id: "11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:36.323171   21245 cri.go:89] found id: ""
	I0717 00:07:36.323181   21245 logs.go:276] 1 containers: [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b]
	I0717 00:07:36.323229   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.326540   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:07:36.326606   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:07:36.358699   21245 cri.go:89] found id: "71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:36.358726   21245 cri.go:89] found id: ""
	I0717 00:07:36.358736   21245 logs.go:276] 1 containers: [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6]
	I0717 00:07:36.358779   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.361860   21245 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:07:36.361921   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:07:36.394291   21245 cri.go:89] found id: "0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:36.394311   21245 cri.go:89] found id: ""
	I0717 00:07:36.394318   21245 logs.go:276] 1 containers: [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5]
	I0717 00:07:36.394371   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.397695   21245 logs.go:123] Gathering logs for kube-apiserver [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef] ...
	I0717 00:07:36.397729   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:36.442031   21245 logs.go:123] Gathering logs for etcd [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba] ...
	I0717 00:07:36.442067   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:36.487909   21245 logs.go:123] Gathering logs for coredns [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a] ...
	I0717 00:07:36.487943   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:36.523250   21245 logs.go:123] Gathering logs for kube-scheduler [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd] ...
	I0717 00:07:36.523279   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:36.566977   21245 logs.go:123] Gathering logs for kube-proxy [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b] ...
	I0717 00:07:36.567017   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:36.600786   21245 logs.go:123] Gathering logs for kubelet ...
	I0717 00:07:36.600811   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 00:07:36.622962   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.128879    1742 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623157   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.128994    1742 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623341   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129050    1742 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623554   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623751   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623991   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.624189   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.624394   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:36.665853   21245 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:07:36.665886   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:07:36.762597   21245 logs.go:123] Gathering logs for kindnet [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5] ...
	I0717 00:07:36.762650   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:36.802172   21245 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:07:36.802206   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:07:36.878845   21245 logs.go:123] Gathering logs for container status ...
	I0717 00:07:36.878880   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:07:36.921163   21245 logs.go:123] Gathering logs for dmesg ...
	I0717 00:07:36.921192   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:07:36.933602   21245 logs.go:123] Gathering logs for kube-controller-manager [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6] ...
	I0717 00:07:36.933639   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:36.989012   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:36.989040   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 00:07:36.989096   21245 out.go:239] X Problems detected in kubelet:
	W0717 00:07:36.989112   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.989128   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.989142   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.989154   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.989165   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:36.989175   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:36.989182   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:07:46.989712   21245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:07:47.003539   21245 api_server.go:72] duration metric: took 2m6.713307726s to wait for apiserver process to appear ...
	I0717 00:07:47.003570   21245 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:07:47.003624   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:07:47.003729   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:07:47.039149   21245 cri.go:89] found id: "81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:47.039173   21245 cri.go:89] found id: ""
	I0717 00:07:47.039182   21245 logs.go:276] 1 containers: [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef]
	I0717 00:07:47.039238   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.042577   21245 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:07:47.042640   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:07:47.078678   21245 cri.go:89] found id: "fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:47.078729   21245 cri.go:89] found id: ""
	I0717 00:07:47.078738   21245 logs.go:276] 1 containers: [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba]
	I0717 00:07:47.078790   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.082431   21245 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:07:47.082495   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:07:47.116999   21245 cri.go:89] found id: "c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:47.117024   21245 cri.go:89] found id: ""
	I0717 00:07:47.117031   21245 logs.go:276] 1 containers: [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a]
	I0717 00:07:47.117080   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.120325   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:07:47.120383   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:07:47.152967   21245 cri.go:89] found id: "2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:47.152990   21245 cri.go:89] found id: ""
	I0717 00:07:47.152997   21245 logs.go:276] 1 containers: [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd]
	I0717 00:07:47.153039   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.156347   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:07:47.156406   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:07:47.188920   21245 cri.go:89] found id: "11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:47.188940   21245 cri.go:89] found id: ""
	I0717 00:07:47.188948   21245 logs.go:276] 1 containers: [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b]
	I0717 00:07:47.188993   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.192211   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:07:47.192284   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:07:47.224803   21245 cri.go:89] found id: "71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:47.224825   21245 cri.go:89] found id: ""
	I0717 00:07:47.224832   21245 logs.go:276] 1 containers: [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6]
	I0717 00:07:47.224879   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.228032   21245 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:07:47.228082   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:07:47.260640   21245 cri.go:89] found id: "0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:47.260659   21245 cri.go:89] found id: ""
	I0717 00:07:47.260665   21245 logs.go:276] 1 containers: [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5]
	I0717 00:07:47.260733   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.264035   21245 logs.go:123] Gathering logs for kube-apiserver [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef] ...
	I0717 00:07:47.264062   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:47.307542   21245 logs.go:123] Gathering logs for etcd [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba] ...
	I0717 00:07:47.307581   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:47.352648   21245 logs.go:123] Gathering logs for coredns [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a] ...
	I0717 00:07:47.352683   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:47.388745   21245 logs.go:123] Gathering logs for kube-proxy [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b] ...
	I0717 00:07:47.388778   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:47.422861   21245 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:07:47.422896   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:07:47.526454   21245 logs.go:123] Gathering logs for dmesg ...
	I0717 00:07:47.526488   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:07:47.538917   21245 logs.go:123] Gathering logs for kube-scheduler [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd] ...
	I0717 00:07:47.538947   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:47.581561   21245 logs.go:123] Gathering logs for kube-controller-manager [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6] ...
	I0717 00:07:47.581594   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:47.634834   21245 logs.go:123] Gathering logs for kindnet [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5] ...
	I0717 00:07:47.634869   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:47.676127   21245 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:07:47.676157   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:07:47.754501   21245 logs.go:123] Gathering logs for container status ...
	I0717 00:07:47.754541   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:07:47.797074   21245 logs.go:123] Gathering logs for kubelet ...
	I0717 00:07:47.797103   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 00:07:47.823378   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.128879    1742 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.823577   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.128994    1742 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.823770   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129050    1742 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.823994   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.824188   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.824406   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.824599   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.824812   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:47.866461   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:47.866498   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 00:07:47.866552   21245 out.go:239] X Problems detected in kubelet:
	W0717 00:07:47.866562   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.866572   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.866583   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.866592   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.866603   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:47.866610   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:47.866617   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:07:57.868434   21245 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 00:07:57.872226   21245 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 00:07:57.873112   21245 api_server.go:141] control plane version: v1.30.2
	I0717 00:07:57.873137   21245 api_server.go:131] duration metric: took 10.869560009s to wait for apiserver health ...
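The healthz probe logged above can be reproduced by hand; a sketch using the endpoint from the log, assuming anonymous access to /healthz is enabled (the Kubernetes default) and using -k because the cluster's self-signed CA is presumably not in the host trust store:

curl -k https://192.168.49.2:8443/healthz
# expected body on a healthy apiserver: ok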
	I0717 00:07:57.873147   21245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:07:57.873170   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:07:57.873225   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:07:57.908153   21245 cri.go:89] found id: "81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:57.908178   21245 cri.go:89] found id: ""
	I0717 00:07:57.908187   21245 logs.go:276] 1 containers: [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef]
	I0717 00:07:57.908243   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:57.911546   21245 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:07:57.911616   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:07:57.946468   21245 cri.go:89] found id: "fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:57.946493   21245 cri.go:89] found id: ""
	I0717 00:07:57.946500   21245 logs.go:276] 1 containers: [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba]
	I0717 00:07:57.946544   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:57.949901   21245 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:07:57.949957   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:07:57.983995   21245 cri.go:89] found id: "c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:57.984023   21245 cri.go:89] found id: ""
	I0717 00:07:57.984032   21245 logs.go:276] 1 containers: [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a]
	I0717 00:07:57.984096   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:57.987384   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:07:57.987442   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:07:58.022253   21245 cri.go:89] found id: "2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:58.022278   21245 cri.go:89] found id: ""
	I0717 00:07:58.022287   21245 logs.go:276] 1 containers: [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd]
	I0717 00:07:58.022341   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:58.025883   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:07:58.025946   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:07:58.060200   21245 cri.go:89] found id: "11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:58.060226   21245 cri.go:89] found id: ""
	I0717 00:07:58.060237   21245 logs.go:276] 1 containers: [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b]
	I0717 00:07:58.060288   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:58.063427   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:07:58.063486   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:07:58.096823   21245 cri.go:89] found id: "71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:58.096842   21245 cri.go:89] found id: ""
	I0717 00:07:58.096849   21245 logs.go:276] 1 containers: [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6]
	I0717 00:07:58.096893   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:58.100054   21245 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:07:58.100105   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:07:58.132179   21245 cri.go:89] found id: "0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:58.132202   21245 cri.go:89] found id: ""
	I0717 00:07:58.132213   21245 logs.go:276] 1 containers: [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5]
	I0717 00:07:58.132263   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:58.135395   21245 logs.go:123] Gathering logs for kube-controller-manager [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6] ...
	I0717 00:07:58.135419   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:58.190400   21245 logs.go:123] Gathering logs for container status ...
	I0717 00:07:58.190435   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:07:58.232614   21245 logs.go:123] Gathering logs for coredns [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a] ...
	I0717 00:07:58.232645   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:58.268084   21245 logs.go:123] Gathering logs for kube-scheduler [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd] ...
	I0717 00:07:58.268115   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:58.308206   21245 logs.go:123] Gathering logs for kube-proxy [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b] ...
	I0717 00:07:58.308242   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:58.342386   21245 logs.go:123] Gathering logs for kubelet ...
	I0717 00:07:58.342419   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 00:07:58.367512   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.128879    1742 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.367685   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.128994    1742 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.367825   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129050    1742 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368007   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368145   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368295   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368427   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368576   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:58.410033   21245 logs.go:123] Gathering logs for dmesg ...
	I0717 00:07:58.410082   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:07:58.422489   21245 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:07:58.422519   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:07:58.516296   21245 logs.go:123] Gathering logs for kube-apiserver [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef] ...
	I0717 00:07:58.516325   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:58.559240   21245 logs.go:123] Gathering logs for etcd [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba] ...
	I0717 00:07:58.559276   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:58.602314   21245 logs.go:123] Gathering logs for kindnet [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5] ...
	I0717 00:07:58.602346   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:58.641290   21245 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:07:58.641322   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:07:58.715089   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:58.715136   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 00:07:58.715211   21245 out.go:239] X Problems detected in kubelet:
	W0717 00:07:58.715227   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.715238   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.715255   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.715267   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.715277   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:58.715288   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:58.715298   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:08:08.726850   21245 system_pods.go:59] 19 kube-system pods found
	I0717 00:08:08.726881   21245 system_pods.go:61] "coredns-7db6d8ff4d-5wj8z" [ebab405b-8b19-41b4-9ade-70d1f44663f0] Running
	I0717 00:08:08.726886   21245 system_pods.go:61] "csi-hostpath-attacher-0" [5119bf74-f492-4daa-b7a6-c340cefcd844] Running
	I0717 00:08:08.726890   21245 system_pods.go:61] "csi-hostpath-resizer-0" [60d3a254-9888-4984-999b-4320716ef437] Running
	I0717 00:08:08.726893   21245 system_pods.go:61] "csi-hostpathplugin-bwnfc" [113aede1-ee6e-49c7-8b2a-fe74ff0c0c03] Running
	I0717 00:08:08.726896   21245 system_pods.go:61] "etcd-addons-957510" [803445be-4a19-4b62-bb2d-adbc1c8b3a11] Running
	I0717 00:08:08.726900   21245 system_pods.go:61] "kindnet-t5p77" [64ea96f1-5fab-40b2-a150-c72cd0f61dff] Running
	I0717 00:08:08.726903   21245 system_pods.go:61] "kube-apiserver-addons-957510" [23d09e74-2585-4bad-a247-5bd11626c398] Running
	I0717 00:08:08.726906   21245 system_pods.go:61] "kube-controller-manager-addons-957510" [1d67f06b-27c4-468e-8a35-a581d913ac10] Running
	I0717 00:08:08.726910   21245 system_pods.go:61] "kube-ingress-dns-minikube" [5e1c5890-c2f9-4c82-aa6c-8895839fcb19] Running
	I0717 00:08:08.726913   21245 system_pods.go:61] "kube-proxy-bvcbh" [6c52b57c-87eb-4842-a98f-48d9bd361f7b] Running
	I0717 00:08:08.726917   21245 system_pods.go:61] "kube-scheduler-addons-957510" [196ec5e6-6d64-4664-a494-23b5eb636cd3] Running
	I0717 00:08:08.726921   21245 system_pods.go:61] "metrics-server-c59844bb4-6hgp6" [40f452f3-f225-4b33-88fc-6a0362123620] Running
	I0717 00:08:08.726924   21245 system_pods.go:61] "nvidia-device-plugin-daemonset-vxl6w" [62fe154c-efaa-413e-90ec-020e5c5db0b7] Running
	I0717 00:08:08.726931   21245 system_pods.go:61] "registry-proxy-nqrkw" [23e004ae-eb71-4040-bb09-9a393ed5044a] Running
	I0717 00:08:08.726934   21245 system_pods.go:61] "registry-stqvk" [ab363c33-d118-4417-9ebe-8caaebc1efff] Running
	I0717 00:08:08.726937   21245 system_pods.go:61] "snapshot-controller-745499f584-9qb2w" [0e7210a7-2baa-4549-8515-5520d4d2ec1e] Running
	I0717 00:08:08.726940   21245 system_pods.go:61] "snapshot-controller-745499f584-qp49p" [725fc7fb-25ca-4913-a457-76c2f14a3fa9] Running
	I0717 00:08:08.726943   21245 system_pods.go:61] "storage-provisioner" [f782a017-1180-4eb6-8c64-0519925113e2] Running
	I0717 00:08:08.726948   21245 system_pods.go:61] "tiller-deploy-6677d64bcd-qmhpn" [dd11389b-b3d6-4f2a-b725-9f58dcbc7c1c] Running
	I0717 00:08:08.726953   21245 system_pods.go:74] duration metric: took 10.853800724s to wait for pod list to return data ...
	I0717 00:08:08.726961   21245 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:08:08.728940   21245 default_sa.go:45] found service account: "default"
	I0717 00:08:08.728959   21245 default_sa.go:55] duration metric: took 1.990301ms for default service account to be created ...
	I0717 00:08:08.728967   21245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:08:08.737814   21245 system_pods.go:86] 19 kube-system pods found
	I0717 00:08:08.737843   21245 system_pods.go:89] "coredns-7db6d8ff4d-5wj8z" [ebab405b-8b19-41b4-9ade-70d1f44663f0] Running
	I0717 00:08:08.737849   21245 system_pods.go:89] "csi-hostpath-attacher-0" [5119bf74-f492-4daa-b7a6-c340cefcd844] Running
	I0717 00:08:08.737853   21245 system_pods.go:89] "csi-hostpath-resizer-0" [60d3a254-9888-4984-999b-4320716ef437] Running
	I0717 00:08:08.737858   21245 system_pods.go:89] "csi-hostpathplugin-bwnfc" [113aede1-ee6e-49c7-8b2a-fe74ff0c0c03] Running
	I0717 00:08:08.737862   21245 system_pods.go:89] "etcd-addons-957510" [803445be-4a19-4b62-bb2d-adbc1c8b3a11] Running
	I0717 00:08:08.737866   21245 system_pods.go:89] "kindnet-t5p77" [64ea96f1-5fab-40b2-a150-c72cd0f61dff] Running
	I0717 00:08:08.737871   21245 system_pods.go:89] "kube-apiserver-addons-957510" [23d09e74-2585-4bad-a247-5bd11626c398] Running
	I0717 00:08:08.737875   21245 system_pods.go:89] "kube-controller-manager-addons-957510" [1d67f06b-27c4-468e-8a35-a581d913ac10] Running
	I0717 00:08:08.737880   21245 system_pods.go:89] "kube-ingress-dns-minikube" [5e1c5890-c2f9-4c82-aa6c-8895839fcb19] Running
	I0717 00:08:08.737884   21245 system_pods.go:89] "kube-proxy-bvcbh" [6c52b57c-87eb-4842-a98f-48d9bd361f7b] Running
	I0717 00:08:08.737888   21245 system_pods.go:89] "kube-scheduler-addons-957510" [196ec5e6-6d64-4664-a494-23b5eb636cd3] Running
	I0717 00:08:08.737892   21245 system_pods.go:89] "metrics-server-c59844bb4-6hgp6" [40f452f3-f225-4b33-88fc-6a0362123620] Running
	I0717 00:08:08.737897   21245 system_pods.go:89] "nvidia-device-plugin-daemonset-vxl6w" [62fe154c-efaa-413e-90ec-020e5c5db0b7] Running
	I0717 00:08:08.737900   21245 system_pods.go:89] "registry-proxy-nqrkw" [23e004ae-eb71-4040-bb09-9a393ed5044a] Running
	I0717 00:08:08.737904   21245 system_pods.go:89] "registry-stqvk" [ab363c33-d118-4417-9ebe-8caaebc1efff] Running
	I0717 00:08:08.737908   21245 system_pods.go:89] "snapshot-controller-745499f584-9qb2w" [0e7210a7-2baa-4549-8515-5520d4d2ec1e] Running
	I0717 00:08:08.737911   21245 system_pods.go:89] "snapshot-controller-745499f584-qp49p" [725fc7fb-25ca-4913-a457-76c2f14a3fa9] Running
	I0717 00:08:08.737915   21245 system_pods.go:89] "storage-provisioner" [f782a017-1180-4eb6-8c64-0519925113e2] Running
	I0717 00:08:08.737920   21245 system_pods.go:89] "tiller-deploy-6677d64bcd-qmhpn" [dd11389b-b3d6-4f2a-b725-9f58dcbc7c1c] Running
	I0717 00:08:08.737926   21245 system_pods.go:126] duration metric: took 8.954809ms to wait for k8s-apps to be running ...
	I0717 00:08:08.737933   21245 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:08:08.737983   21245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:08:08.749145   21245 system_svc.go:56] duration metric: took 11.20132ms WaitForService to wait for kubelet
	I0717 00:08:08.749179   21245 kubeadm.go:582] duration metric: took 2m28.45895358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:08:08.749200   21245 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:08:08.752286   21245 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 00:08:08.752321   21245 node_conditions.go:123] node cpu capacity is 8
	I0717 00:08:08.752338   21245 node_conditions.go:105] duration metric: took 3.132147ms to run NodePressure ...
	I0717 00:08:08.752352   21245 start.go:241] waiting for startup goroutines ...
	I0717 00:08:08.752362   21245 start.go:246] waiting for cluster config update ...
	I0717 00:08:08.752385   21245 start.go:255] writing updated cluster config ...
	I0717 00:08:08.752754   21245 ssh_runner.go:195] Run: rm -f paused
	I0717 00:08:08.799014   21245 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:08:08.801132   21245 out.go:177] * Done! kubectl is now configured to use "addons-957510" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.166236992Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c88a7dce-c07f-4f3c-8710-00865c1ee0ea name=/runtime.v1.ImageService/ImageStatus
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.167008384Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=47993082-94d8-4fc6-b041-2cea836bb32c name=/runtime.v1.ImageService/ImageStatus
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.167682545Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=47993082-94d8-4fc6-b041-2cea836bb32c name=/runtime.v1.ImageService/ImageStatus
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.168545511Z" level=info msg="Creating container: default/hello-world-app-6778b5fc9f-gt5lx/hello-world-app" id=25b49bb4-46ac-4617-9284-9543f72ce800 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.168657050Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.182824623Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/240e07471510deacadf380a8d256f9a9f57adf1b53e396957e6437ac4809bade/merged/etc/passwd: no such file or directory"
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.182856598Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/240e07471510deacadf380a8d256f9a9f57adf1b53e396957e6437ac4809bade/merged/etc/group: no such file or directory"
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.215717892Z" level=info msg="Created container dab5ee9dc6842105888b7976a28ef0ee8c63752e36de3b1f0dbe9bf03ba183da: default/hello-world-app-6778b5fc9f-gt5lx/hello-world-app" id=25b49bb4-46ac-4617-9284-9543f72ce800 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.216404520Z" level=info msg="Starting container: dab5ee9dc6842105888b7976a28ef0ee8c63752e36de3b1f0dbe9bf03ba183da" id=84d7de95-2e73-44da-8685-fd87d48851de name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.221829834Z" level=info msg="Started container" PID=11194 containerID=dab5ee9dc6842105888b7976a28ef0ee8c63752e36de3b1f0dbe9bf03ba183da description=default/hello-world-app-6778b5fc9f-gt5lx/hello-world-app id=84d7de95-2e73-44da-8685-fd87d48851de name=/runtime.v1.RuntimeService/StartContainer sandboxID=33372cce1f7afe646cc1c7a211c83c1add71ecc905d6a5236f6726a828340d54
	Jul 17 00:11:04 addons-957510 crio[1030]: time="2024-07-17 00:11:04.935191783Z" level=info msg="Stopping container: a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed (timeout: 2s)" id=9cf87dc4-3376-464c-b21a-04d9ee26bfb7 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 00:11:06 addons-957510 crio[1030]: time="2024-07-17 00:11:06.941334177Z" level=warning msg="Stopping container a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=9cf87dc4-3376-464c-b21a-04d9ee26bfb7 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 00:11:06 addons-957510 conmon[6444]: conmon a881415601f376a5f674 <ninfo>: container 6456 exited with status 137
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.074226361Z" level=info msg="Stopped container a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed: ingress-nginx/ingress-nginx-controller-768f948f8f-8jqfn/controller" id=9cf87dc4-3376-464c-b21a-04d9ee26bfb7 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.074663978Z" level=info msg="Stopping pod sandbox: 181c29bb34853eabfdfc031e02b406a46f7b3a55a62f98c6e73f735ce49423af" id=f01d3d98-58a2-4e22-9dc2-098a3102aadb name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.077921030Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-WFYFQG5AKUK4KZ7M - [0:0]\n:KUBE-HP-KKU5VT2ZP47Z6UPV - [0:0]\n-X KUBE-HP-WFYFQG5AKUK4KZ7M\n-X KUBE-HP-KKU5VT2ZP47Z6UPV\nCOMMIT\n"
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.079425805Z" level=info msg="Closing host port tcp:80"
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.079466898Z" level=info msg="Closing host port tcp:443"
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.080943944Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.080966994Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.081115837Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-768f948f8f-8jqfn Namespace:ingress-nginx ID:181c29bb34853eabfdfc031e02b406a46f7b3a55a62f98c6e73f735ce49423af UID:05cf5423-75de-4998-b8b7-63cc9447eb68 NetNS:/var/run/netns/09fc1724-ca57-44b6-bc2c-76e8662a3c94 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.081237233Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-8jqfn from CNI network \"kindnet\" (type=ptp)"
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.121510584Z" level=info msg="Stopped pod sandbox: 181c29bb34853eabfdfc031e02b406a46f7b3a55a62f98c6e73f735ce49423af" id=f01d3d98-58a2-4e22-9dc2-098a3102aadb name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.388298429Z" level=info msg="Removing container: a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed" id=bec18559-4b28-4b57-863e-387b7dbaef23 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.402361980Z" level=info msg="Removed container a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed: ingress-nginx/ingress-nginx-controller-768f948f8f-8jqfn/controller" id=bec18559-4b28-4b57-863e-387b7dbaef23 name=/runtime.v1.RuntimeService/RemoveContainer
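
The sequence above is a normal CRI-O teardown of the ingress controller: the stop signal times out after 2s, conmon reports exit status 137 (128 + SIGKILL), and sandbox teardown flushes the KUBE-HP-* hostport NAT chains for ports 80/443 before deleting the CNI attachment. A minimal spot-check that the hostport chains are really gone, assuming iptables-save is available inside the kicbase node container (not part of the test run):

	docker exec addons-957510 sh -c 'iptables-save -t nat | grep KUBE-HP- || echo "no hostport chains left"'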
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dab5ee9dc6842       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   33372cce1f7af       hello-world-app-6778b5fc9f-gt5lx
	603cc2d5f3003       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   2e57fa18ce3b6       nginx
	6dbdb17793c74       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   6448fcef75de5       headlamp-7867546754-gqqd4
	07383d40a8a7b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 4 minutes ago       Running             gcp-auth                  0                   dbcf0316840ce       gcp-auth-5db96cd9b4-qp6rr
	cc99503030cd9       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              4 minutes ago       Running             yakd                      0                   8349a2c01eee4       yakd-dashboard-799879c74f-7m6rj
	b4543f1979531       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              patch                     0                   81904c042f5f5       ingress-nginx-admission-patch-pzr2p
	d985f0edcf7e9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   0a9e19a37905a       ingress-nginx-admission-create-x7qqn
	cadab9d57975e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   32a0eadbc44dd       metrics-server-c59844bb4-6hgp6
	c393bc759d0e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   14f6496da41e2       storage-provisioner
	c567976e9c07b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   60ae80776905c       coredns-7db6d8ff4d-5wj8z
	0ceadb4c6599e       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115                           5 minutes ago       Running             kindnet-cni               0                   6bfd66e9c1ea4       kindnet-t5p77
	11425c4b5b25a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                             5 minutes ago       Running             kube-proxy                0                   ea91018cad8a3       kube-proxy-bvcbh
	2295ef488b3d8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                             5 minutes ago       Running             kube-scheduler            0                   3548dc8ed9025       kube-scheduler-addons-957510
	71595cce63070       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                             5 minutes ago       Running             kube-controller-manager   0                   89cd5b6944ecb       kube-controller-manager-addons-957510
	fe7b23d958f97       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   76b2554bb6499       etcd-addons-957510
	81a854553dec6       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                             5 minutes ago       Running             kube-apiserver            0                   d3468b84e0ee4       kube-apiserver-addons-957510
	
	
	==> coredns [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a] <==
	[INFO] 10.244.0.9:44353 - 45254 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099001s
	[INFO] 10.244.0.9:58287 - 23596 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004682751s
	[INFO] 10.244.0.9:58287 - 34608 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005348433s
	[INFO] 10.244.0.9:41381 - 10914 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005373308s
	[INFO] 10.244.0.9:41381 - 7325 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.023407916s
	[INFO] 10.244.0.9:45442 - 35194 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006324323s
	[INFO] 10.244.0.9:45442 - 20551 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006458581s
	[INFO] 10.244.0.9:43865 - 874 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066375s
	[INFO] 10.244.0.9:43865 - 31343 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091279s
	[INFO] 10.244.0.20:51353 - 49497 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196523s
	[INFO] 10.244.0.20:35332 - 13258 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000163174s
	[INFO] 10.244.0.20:43550 - 33118 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107325s
	[INFO] 10.244.0.20:38046 - 39500 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114093s
	[INFO] 10.244.0.20:36745 - 45526 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124052s
	[INFO] 10.244.0.20:55234 - 10191 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170774s
	[INFO] 10.244.0.20:52931 - 39191 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005902509s
	[INFO] 10.244.0.20:36858 - 59651 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007520362s
	[INFO] 10.244.0.20:48035 - 24244 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006161377s
	[INFO] 10.244.0.20:59410 - 60807 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007408859s
	[INFO] 10.244.0.20:53557 - 38929 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005649553s
	[INFO] 10.244.0.20:46788 - 4828 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00775879s
	[INFO] 10.244.0.20:44742 - 37096 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000923812s
	[INFO] 10.244.0.20:45602 - 45687 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00110212s
	[INFO] 10.244.0.26:55101 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000226076s
	[INFO] 10.244.0.26:40134 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147343s
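
The NXDOMAIN/NOERROR pairs above are ordinary ndots search-path expansion: each short name is tried against the cluster suffixes and then the host's GCE suffixes (us-central1-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal) before the absolute name resolves. A sketch for confirming the search list from inside a pod; the pod name is a placeholder, not taken from this report:

	kubectl --context addons-957510 exec <pod-name> -- cat /etc/resolv.conf
	# expected shape, inferred from the queries above:
	#   search <ns>.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal ...
	#   options ndots:5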
	
	
	==> describe nodes <==
	Name:               addons-957510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-957510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=addons-957510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_05_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-957510
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:05:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-957510
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:11:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:09:00 +0000   Wed, 17 Jul 2024 00:05:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:09:00 +0000   Wed, 17 Jul 2024 00:05:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:09:00 +0000   Wed, 17 Jul 2024 00:05:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:09:00 +0000   Wed, 17 Jul 2024 00:05:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-957510
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859328Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859328Ki
	  pods:               110
	System Info:
	  Machine ID:                 657e3d13bd5d4ac4bc838c3d4cd57cc8
	  System UUID:                a3a08e87-d85e-4f7e-bd87-33ecbd5c47c7
	  Boot ID:                    3bd8d3e2-5698-4d65-8304-5a0a45a28197
	  Kernel Version:             5.15.0-1062-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-gt5lx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-qp6rr                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  headlamp                    headlamp-7867546754-gqqd4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 coredns-7db6d8ff4d-5wj8z                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m32s
	  kube-system                 etcd-addons-957510                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m47s
	  kube-system                 kindnet-t5p77                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m32s
	  kube-system                 kube-apiserver-addons-957510             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-controller-manager-addons-957510    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-proxy-bvcbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-scheduler-addons-957510             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 metrics-server-c59844bb4-6hgp6           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m27s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  yakd-dashboard              yakd-dashboard-799879c74f-7m6rj          0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             548Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m53s (x8 over 5m53s)  kubelet          Node addons-957510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s (x8 over 5m53s)  kubelet          Node addons-957510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x8 over 5m53s)  kubelet          Node addons-957510 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m47s                  kubelet          Node addons-957510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s                  kubelet          Node addons-957510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s                  kubelet          Node addons-957510 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m33s                  node-controller  Node addons-957510 event: Registered Node addons-957510 in Controller
	  Normal  NodeReady                5m13s                  kubelet          Node addons-957510 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000702] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000678] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000634] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.648033] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.057399] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.006733] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015635] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002903] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015150] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.938511] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 00:08] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[  +1.007400] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[  +2.015849] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[  +4.191720] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[Jul17 00:09] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[ +16.126848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[ +32.509623] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	
	
	==> etcd [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba] <==
	{"level":"info","ts":"2024-07-17T00:05:43.738988Z","caller":"traceutil/trace.go:171","msg":"trace[835334306] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"195.922364ms","start":"2024-07-17T00:05:43.543031Z","end":"2024-07-17T00:05:43.738953Z","steps":["trace[835334306] 'process raft request'  (duration: 94.450867ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.739505Z","caller":"traceutil/trace.go:171","msg":"trace[237447508] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"196.683009ms","start":"2024-07-17T00:05:43.542743Z","end":"2024-07-17T00:05:43.739426Z","steps":["trace[237447508] 'process raft request'  (duration: 78.807255ms)","trace[237447508] 'compare'  (duration: 15.631884ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:43.821325Z","caller":"traceutil/trace.go:171","msg":"trace[327262368] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"277.812357ms","start":"2024-07-17T00:05:43.54318Z","end":"2024-07-17T00:05:43.820992Z","steps":["trace[327262368] 'process raft request'  (duration: 94.44477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.824141Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.34824ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128030573248648663 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-bkp95\" mod_revision:427 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-bkp95\" value_size:2367 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-bkp95\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T00:05:43.825057Z","caller":"traceutil/trace.go:171","msg":"trace[1907663100] transaction","detail":"{read_only:false; number_of_response:1; response_revision:434; }","duration":"281.692283ms","start":"2024-07-17T00:05:43.543341Z","end":"2024-07-17T00:05:43.825033Z","steps":["trace[1907663100] 'process raft request'  (duration: 94.340421ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.831455Z","caller":"traceutil/trace.go:171","msg":"trace[318979519] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"288.022028ms","start":"2024-07-17T00:05:43.543389Z","end":"2024-07-17T00:05:43.831411Z","steps":["trace[318979519] 'process raft request'  (duration: 94.427699ms)","trace[318979519] 'store kv pair into bolt db' {req_type:put; key:/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-bkp95; req_size:2434; } (duration: 184.089855ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:43.84236Z","caller":"traceutil/trace.go:171","msg":"trace[1434870633] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"103.429161ms","start":"2024-07-17T00:05:43.738914Z","end":"2024-07-17T00:05:43.842343Z","steps":["trace[1434870633] 'process raft request'  (duration: 103.40138ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.842686Z","caller":"traceutil/trace.go:171","msg":"trace[454952717] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"299.007266ms","start":"2024-07-17T00:05:43.543661Z","end":"2024-07-17T00:05:43.842668Z","steps":["trace[454952717] 'process raft request'  (duration: 281.365191ms)","trace[454952717] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/deployments/kube-system/coredns; req_size:4078; } (duration: 10.323103ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:43.842817Z","caller":"traceutil/trace.go:171","msg":"trace[555935544] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"119.221251ms","start":"2024-07-17T00:05:43.723587Z","end":"2024-07-17T00:05:43.842809Z","steps":["trace[555935544] 'process raft request'  (duration: 118.666651ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.842834Z","caller":"traceutil/trace.go:171","msg":"trace[1039293278] linearizableReadLoop","detail":"{readStateIndex:448; appliedIndex:442; }","duration":"299.013709ms","start":"2024-07-17T00:05:43.54379Z","end":"2024-07-17T00:05:43.842804Z","steps":["trace[1039293278] 'read index received'  (duration: 77.767144ms)","trace[1039293278] 'applied index is now lower than readState.Index'  (duration: 221.245747ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:43.842891Z","caller":"traceutil/trace.go:171","msg":"trace[997092808] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"208.827109ms","start":"2024-07-17T00:05:43.634057Z","end":"2024-07-17T00:05:43.842884Z","steps":["trace[997092808] 'process raft request'  (duration: 208.107466ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.842968Z","caller":"traceutil/trace.go:171","msg":"trace[774825181] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"104.562529ms","start":"2024-07-17T00:05:43.73839Z","end":"2024-07-17T00:05:43.842953Z","steps":["trace[774825181] 'process raft request'  (duration: 103.898216ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.842999Z","caller":"traceutil/trace.go:171","msg":"trace[687420027] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"208.572081ms","start":"2024-07-17T00:05:43.634421Z","end":"2024-07-17T00:05:43.842993Z","steps":["trace[687420027] 'process raft request'  (duration: 207.797377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.843099Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.296486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-957510\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2024-07-17T00:05:43.843116Z","caller":"traceutil/trace.go:171","msg":"trace[696636387] range","detail":"{range_begin:/registry/minions/addons-957510; range_end:; response_count:1; response_revision:441; }","duration":"299.340331ms","start":"2024-07-17T00:05:43.54377Z","end":"2024-07-17T00:05:43.843111Z","steps":["trace[696636387] 'agreement among raft nodes before linearized reading'  (duration: 299.29242ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.934045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.077285ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:05:43.934103Z","caller":"traceutil/trace.go:171","msg":"trace[1245650603] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:442; }","duration":"299.19366ms","start":"2024-07-17T00:05:43.634897Z","end":"2024-07-17T00:05:43.934091Z","steps":["trace[1245650603] 'agreement among raft nodes before linearized reading'  (duration: 299.084082ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.934348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.541294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:05:43.934377Z","caller":"traceutil/trace.go:171","msg":"trace[1544716083] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:442; }","duration":"195.590938ms","start":"2024-07-17T00:05:43.738777Z","end":"2024-07-17T00:05:43.934368Z","steps":["trace[1544716083] 'agreement among raft nodes before linearized reading'  (duration: 195.545137ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.934491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.293363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-957510\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2024-07-17T00:05:43.934511Z","caller":"traceutil/trace.go:171","msg":"trace[998651653] range","detail":"{range_begin:/registry/minions/addons-957510; range_end:; response_count:1; response_revision:442; }","duration":"196.341559ms","start":"2024-07-17T00:05:43.738163Z","end":"2024-07-17T00:05:43.934505Z","steps":["trace[998651653] 'agreement among raft nodes before linearized reading'  (duration: 196.297152ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:44.125374Z","caller":"traceutil/trace.go:171","msg":"trace[1245597813] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"204.790646ms","start":"2024-07-17T00:05:43.920562Z","end":"2024-07-17T00:05:44.125352Z","steps":["trace[1245597813] 'process raft request'  (duration: 121.692323ms)","trace[1245597813] 'compare'  (duration: 82.542472ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:44.12553Z","caller":"traceutil/trace.go:171","msg":"trace[1227819910] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"196.243609ms","start":"2024-07-17T00:05:43.929278Z","end":"2024-07-17T00:05:44.125521Z","steps":["trace[1227819910] 'process raft request'  (duration: 195.607359ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:44.125736Z","caller":"traceutil/trace.go:171","msg":"trace[2108594412] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"196.297426ms","start":"2024-07-17T00:05:43.92943Z","end":"2024-07-17T00:05:44.125727Z","steps":["trace[2108594412] 'process raft request'  (duration: 195.490113ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:06:38.068874Z","caller":"traceutil/trace.go:171","msg":"trace[1933927538] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"111.538215ms","start":"2024-07-17T00:06:37.957314Z","end":"2024-07-17T00:06:38.068853Z","steps":["trace[1933927538] 'process raft request'  (duration: 111.358606ms)"],"step_count":1}
	
	
	==> gcp-auth [07383d40a8a7b73b7ea3ccd6187d01bf085eb24804678e4e29a79f346314be38] <==
	2024/07/17 00:06:41 GCP Auth Webhook started!
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:18 Ready to marshal response ...
	2024/07/17 00:08:18 Ready to write response ...
	2024/07/17 00:08:19 Ready to marshal response ...
	2024/07/17 00:08:19 Ready to write response ...
	2024/07/17 00:08:27 Ready to marshal response ...
	2024/07/17 00:08:27 Ready to write response ...
	2024/07/17 00:08:36 Ready to marshal response ...
	2024/07/17 00:08:36 Ready to write response ...
	2024/07/17 00:08:38 Ready to marshal response ...
	2024/07/17 00:08:38 Ready to write response ...
	2024/07/17 00:08:54 Ready to marshal response ...
	2024/07/17 00:08:54 Ready to write response ...
	2024/07/17 00:11:02 Ready to marshal response ...
	2024/07/17 00:11:02 Ready to write response ...
	
	
	==> kernel <==
	 00:11:12 up 53 min,  0 users,  load average: 0.38, 0.42, 0.26
	Linux addons-957510 5.15.0-1062-gcp #70~20.04.1-Ubuntu SMP Fri May 24 20:12:18 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5] <==
	I0717 00:09:59.020794       1 main.go:303] handling current node
	W0717 00:10:07.889261       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:10:07.889298       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:10:08.997959       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 00:10:08.997990       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0717 00:10:09.021098       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:10:09.021133       1 main.go:303] handling current node
	I0717 00:10:19.020919       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:10:19.020969       1 main.go:303] handling current node
	I0717 00:10:29.020931       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:10:29.020969       1 main.go:303] handling current node
	I0717 00:10:39.021274       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:10:39.021312       1 main.go:303] handling current node
	W0717 00:10:44.270299       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:10:44.270335       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:10:44.662899       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:10:44.662935       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0717 00:10:49.020839       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:10:49.020881       1 main.go:303] handling current node
	I0717 00:10:59.020952       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:10:59.021003       1 main.go:303] handling current node
	W0717 00:11:02.753978       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 00:11:02.754014       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0717 00:11:09.021355       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:11:09.021392       1 main.go:303] handling current node
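
The recurring list/watch failures show the kindnet service account lacking cluster-scope list permission on pods, namespaces, and networkpolicies. A quick way to reproduce the authorizer's answer (a sketch, not part of the test run):

	kubectl --context addons-957510 auth can-i list pods --all-namespaces \
	  --as=system:serviceaccount:kube-system:kindnet
	# expected: no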
	
	
	==> kube-apiserver [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef] <==
	I0717 00:08:09.552827       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.182.225"}
	E0717 00:08:19.690623       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:08:19.696093       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:08:19.701484       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:08:30.115918       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.27:37296: read: connection reset by peer
	I0717 00:08:33.198954       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 00:08:34.216294       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0717 00:08:34.703200       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 00:08:38.675852       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 00:08:39.128551       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.224.235"}
	I0717 00:08:50.051692       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 00:09:10.424727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.424789       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:09:10.438908       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.439035       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:09:10.441821       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.441857       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:09:10.450806       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.450848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:09:10.462314       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.462349       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 00:09:11.442633       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:09:11.462497       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:09:11.470528       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 00:11:02.250843       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.232.111"}
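
The "Unable to authenticate the request" errors are the expected aftermath of tearing an addon down: requests still presenting a bearer token for the deleted local-path-provisioner-service-account are rejected once the account is gone. A sketch to confirm the deletion; the local-path-storage namespace is assumed, not shown in this log:

	kubectl --context addons-957510 -n local-path-storage get serviceaccount local-path-provisioner-service-account
	# expected: Error from server (NotFound)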
	
	
	==> kube-controller-manager [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6] <==
	W0717 00:09:46.417017       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:46.417054       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:09:47.279541       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:47.279581       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:09:57.174777       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:09:57.174819       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:10:11.621855       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:11.621888       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:10:24.818137       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:24.818171       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:10:42.487085       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:42.487124       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:10:45.318037       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:10:45.318076       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 00:11:02.104489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="15.642302ms"
	I0717 00:11:02.109690       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="5.146039ms"
	I0717 00:11:02.109765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="42.564µs"
	I0717 00:11:02.116506       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="40.142µs"
	I0717 00:11:03.902275       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0717 00:11:03.921705       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="8.68µs"
	I0717 00:11:03.923924       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0717 00:11:04.393998       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="5.796874ms"
	I0717 00:11:04.394097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="59.001µs"
	W0717 00:11:07.240912       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:07.240945       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
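
The repeated PartialObjectMetadata list failures most likely come from metadata informers still watching API groups unregistered earlier (the snapshot.storage.k8s.io handlers were removed around 00:09:10-11 in the kube-apiserver log); they read as cleanup noise rather than a controller fault. A sketch to confirm the CRDs are gone:

	kubectl --context addons-957510 get crd | grep snapshot.storage.k8s.io
	# expected: no output once the VolumeSnapshot CRDs are deleted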
	
	
	==> kube-proxy [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b] <==
	I0717 00:05:43.346605       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:05:43.921652       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 00:05:44.631108       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 00:05:44.631234       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:05:44.638126       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 00:05:44.638211       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 00:05:44.638244       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:05:44.639060       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:05:44.639159       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:05:44.640478       1 config.go:192] "Starting service config controller"
	I0717 00:05:44.641310       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:05:44.720087       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:05:44.731825       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:05:44.731720       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:05:44.731911       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:05:44.720145       1 config.go:319] "Starting node config controller"
	I0717 00:05:44.732028       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:05:44.732035       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd] <==
	W0717 00:05:22.739208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:05:22.739225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:05:22.739267       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:22.739285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:23.564384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:05:23.564419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:05:23.611579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:23.611609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:23.747501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:05:23.747545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:05:23.758961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:05:23.759002       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:05:23.794976       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:23.795008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:23.831092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:05:23.831129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:05:23.866467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:05:23.866498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:05:23.936478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:05:23.936514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:05:23.939500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:05:23.939533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:05:24.015737       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:05:24.015780       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:05:26.137423       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:11:02 addons-957510 kubelet[1742]: I0717 00:11:02.108395    1742 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d3a254-9888-4984-999b-4320716ef437" containerName="csi-resizer"
	Jul 17 00:11:02 addons-957510 kubelet[1742]: I0717 00:11:02.227783    1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d69hf\" (UniqueName: \"kubernetes.io/projected/edc57626-a087-4b9a-9efe-ad992aafb4fe-kube-api-access-d69hf\") pod \"hello-world-app-6778b5fc9f-gt5lx\" (UID: \"edc57626-a087-4b9a-9efe-ad992aafb4fe\") " pod="default/hello-world-app-6778b5fc9f-gt5lx"
	Jul 17 00:11:02 addons-957510 kubelet[1742]: I0717 00:11:02.227942    1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/edc57626-a087-4b9a-9efe-ad992aafb4fe-gcp-creds\") pod \"hello-world-app-6778b5fc9f-gt5lx\" (UID: \"edc57626-a087-4b9a-9efe-ad992aafb4fe\") " pod="default/hello-world-app-6778b5fc9f-gt5lx"
	Jul 17 00:11:03 addons-957510 kubelet[1742]: I0717 00:11:03.232333    1742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5m5s\" (UniqueName: \"kubernetes.io/projected/5e1c5890-c2f9-4c82-aa6c-8895839fcb19-kube-api-access-k5m5s\") pod \"5e1c5890-c2f9-4c82-aa6c-8895839fcb19\" (UID: \"5e1c5890-c2f9-4c82-aa6c-8895839fcb19\") "
	Jul 17 00:11:03 addons-957510 kubelet[1742]: I0717 00:11:03.234226    1742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1c5890-c2f9-4c82-aa6c-8895839fcb19-kube-api-access-k5m5s" (OuterVolumeSpecName: "kube-api-access-k5m5s") pod "5e1c5890-c2f9-4c82-aa6c-8895839fcb19" (UID: "5e1c5890-c2f9-4c82-aa6c-8895839fcb19"). InnerVolumeSpecName "kube-api-access-k5m5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:11:03 addons-957510 kubelet[1742]: I0717 00:11:03.333508    1742 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k5m5s\" (UniqueName: \"kubernetes.io/projected/5e1c5890-c2f9-4c82-aa6c-8895839fcb19-kube-api-access-k5m5s\") on node \"addons-957510\" DevicePath \"\""
	Jul 17 00:11:03 addons-957510 kubelet[1742]: I0717 00:11:03.375516    1742 scope.go:117] "RemoveContainer" containerID="13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a"
	Jul 17 00:11:03 addons-957510 kubelet[1742]: I0717 00:11:03.390803    1742 scope.go:117] "RemoveContainer" containerID="13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a"
	Jul 17 00:11:03 addons-957510 kubelet[1742]: E0717 00:11:03.391209    1742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a\": container with ID starting with 13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a not found: ID does not exist" containerID="13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a"
	Jul 17 00:11:03 addons-957510 kubelet[1742]: I0717 00:11:03.391244    1742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a"} err="failed to get container status \"13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a\": rpc error: code = NotFound desc = could not find container \"13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a\": container with ID starting with 13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a not found: ID does not exist"
	Jul 17 00:11:04 addons-957510 kubelet[1742]: I0717 00:11:04.388328    1742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-gt5lx" podStartSLOduration=0.672930016 podStartE2EDuration="2.388307869s" podCreationTimestamp="2024-07-17 00:11:02 +0000 UTC" firstStartedPulling="2024-07-17 00:11:02.451068172 +0000 UTC m=+337.449218358" lastFinishedPulling="2024-07-17 00:11:04.166446031 +0000 UTC m=+339.164596211" observedRunningTime="2024-07-17 00:11:04.388243632 +0000 UTC m=+339.386393828" watchObservedRunningTime="2024-07-17 00:11:04.388307869 +0000 UTC m=+339.386458065"
	Jul 17 00:11:05 addons-957510 kubelet[1742]: I0717 00:11:05.075904    1742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090a43c6-4407-4947-833f-5dd55a5864b6" path="/var/lib/kubelet/pods/090a43c6-4407-4947-833f-5dd55a5864b6/volumes"
	Jul 17 00:11:05 addons-957510 kubelet[1742]: I0717 00:11:05.076387    1742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1c5890-c2f9-4c82-aa6c-8895839fcb19" path="/var/lib/kubelet/pods/5e1c5890-c2f9-4c82-aa6c-8895839fcb19/volumes"
	Jul 17 00:11:05 addons-957510 kubelet[1742]: I0717 00:11:05.076761    1742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0202ebc-2668-47ee-baec-18ca041823e8" path="/var/lib/kubelet/pods/d0202ebc-2668-47ee-baec-18ca041823e8/volumes"
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.257183    1742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05cf5423-75de-4998-b8b7-63cc9447eb68-webhook-cert\") pod \"05cf5423-75de-4998-b8b7-63cc9447eb68\" (UID: \"05cf5423-75de-4998-b8b7-63cc9447eb68\") "
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.257235    1742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74tvr\" (UniqueName: \"kubernetes.io/projected/05cf5423-75de-4998-b8b7-63cc9447eb68-kube-api-access-74tvr\") pod \"05cf5423-75de-4998-b8b7-63cc9447eb68\" (UID: \"05cf5423-75de-4998-b8b7-63cc9447eb68\") "
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.259065    1742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05cf5423-75de-4998-b8b7-63cc9447eb68-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "05cf5423-75de-4998-b8b7-63cc9447eb68" (UID: "05cf5423-75de-4998-b8b7-63cc9447eb68"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.259083    1742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cf5423-75de-4998-b8b7-63cc9447eb68-kube-api-access-74tvr" (OuterVolumeSpecName: "kube-api-access-74tvr") pod "05cf5423-75de-4998-b8b7-63cc9447eb68" (UID: "05cf5423-75de-4998-b8b7-63cc9447eb68"). InnerVolumeSpecName "kube-api-access-74tvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.358406    1742 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-74tvr\" (UniqueName: \"kubernetes.io/projected/05cf5423-75de-4998-b8b7-63cc9447eb68-kube-api-access-74tvr\") on node \"addons-957510\" DevicePath \"\""
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.358443    1742 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05cf5423-75de-4998-b8b7-63cc9447eb68-webhook-cert\") on node \"addons-957510\" DevicePath \"\""
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.387239    1742 scope.go:117] "RemoveContainer" containerID="a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed"
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.402646    1742 scope.go:117] "RemoveContainer" containerID="a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed"
	Jul 17 00:11:07 addons-957510 kubelet[1742]: E0717 00:11:07.403040    1742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed\": container with ID starting with a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed not found: ID does not exist" containerID="a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed"
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.403080    1742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed"} err="failed to get container status \"a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed\": rpc error: code = NotFound desc = could not find container \"a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed\": container with ID starting with a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed not found: ID does not exist"
	Jul 17 00:11:09 addons-957510 kubelet[1742]: I0717 00:11:09.074967    1742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05cf5423-75de-4998-b8b7-63cc9447eb68" path="/var/lib/kubelet/pods/05cf5423-75de-4998-b8b7-63cc9447eb68/volumes"
	
	
	==> storage-provisioner [c393bc759d0e49a66ab4b193930997dd02bc7ce39cd86df897d0f5d1f06f8e65] <==
	I0717 00:05:59.936859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:05:59.944799       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:05:59.944849       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:05:59.951018       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:05:59.951129       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-957510_89e5a102-4f78-4267-b904-5270b75f732d!
	I0717 00:05:59.951130       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"306f74b4-2ed0-42d7-aa75-f318a87d8dcc", APIVersion:"v1", ResourceVersion:"933", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-957510_89e5a102-4f78-4267-b904-5270b75f732d became leader
	I0717 00:06:00.051335       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-957510_89e5a102-4f78-4267-b904-5270b75f732d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-957510 -n addons-957510
helpers_test.go:261: (dbg) Run:  kubectl --context addons-957510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.56s)

TestAddons/parallel/MetricsServer (307.19s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.4561ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-6hgp6" [40f452f3-f225-4b33-88fc-6a0362123620] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003957804s
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (64.336831ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 2m57.625166352s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (89.999209ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 3m0.932779539s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (66.746768ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 3m3.88462013s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (63.488054ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 3m13.320102664s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (61.788207ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 3m26.453157813s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (63.74745ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 3m40.439864415s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (63.858264ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 3m54.818121349s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (65.19902ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 4m31.956830731s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (61.391089ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 5m2.638710595s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (61.200727ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 5m43.308087768s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (60.793939ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 6m35.49292775s
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-957510 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-957510 top pods -n kube-system: exit status 1 (60.098841ms)

** stderr **
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-5wj8z, age: 7m57.179151077s
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-957510
helpers_test.go:235: (dbg) docker inspect addons-957510:

-- stdout --
	[
	    {
	        "Id": "6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1",
	        "Created": "2024-07-17T00:05:12.056413268Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 21981,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T00:05:12.189522438Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8e13c0121d32d5213820fd1c1408d440c10e972c9e29d75579ef86b050a145b3",
	        "ResolvConfPath": "/var/lib/docker/containers/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1/hosts",
	        "LogPath": "/var/lib/docker/containers/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1/6f98c2cd701a92574d84777fcc9070f65646182a4a52cf299b2b642b2bd3a7e1-json.log",
	        "Name": "/addons-957510",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-957510:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-957510",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fac6802473e8f33274b5d385582fe53c687eb00ff0ea03356b2b8448406e6cd-init/diff:/var/lib/docker/overlay2/bb7af9236849a801cb258b267ec61d57df411fd5cfaae48b7e138223f703f6dd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fac6802473e8f33274b5d385582fe53c687eb00ff0ea03356b2b8448406e6cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fac6802473e8f33274b5d385582fe53c687eb00ff0ea03356b2b8448406e6cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fac6802473e8f33274b5d385582fe53c687eb00ff0ea03356b2b8448406e6cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-957510",
	                "Source": "/var/lib/docker/volumes/addons-957510/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-957510",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-957510",
	                "name.minikube.sigs.k8s.io": "addons-957510",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b163bd16f9d16f1ee01bfa65f772cad11a58d813969ba8e4e371703d8d58c98e",
	            "SandboxKey": "/var/run/docker/netns/b163bd16f9d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-957510": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "dc4d896cb023151875263d302f8a87f1c988b74f80c5bcec5ccaaa0ab83c7bdb",
	                    "EndpointID": "c132e65989466166285b8b470af8c94735ab9e2c922cfc679448e91546b7b799",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-957510",
	                        "6f98c2cd701a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-957510 -n addons-957510
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-957510 logs -n 25: (1.196771267s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-110186                                                                     | download-only-110186   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-874175                                                                     | download-only-874175   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | --download-only -p                                                                          | download-docker-079405 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | download-docker-079405                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-079405                                                                   | download-docker-079405 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-733705   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | binary-mirror-733705                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36213                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-733705                                                                     | binary-mirror-733705   | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| addons  | enable dashboard -p                                                                         | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | addons-957510                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | addons-957510                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-957510 --wait=true                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | -p addons-957510                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-957510 ssh cat                                                                       | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | /opt/local-path-provisioner/pvc-7a2029d1-4210-4ea3-8f80-a2f46d6b3dac_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:09 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | addons-957510                                                                               |                        |         |         |                     |                     |
	| ip      | addons-957510 ip                                                                            | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | -p addons-957510                                                                            |                        |         |         |                     |                     |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC | 17 Jul 24 00:08 UTC |
	|         | addons-957510                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-957510 ssh curl -s                                                                   | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:08 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-957510 addons                                                                        | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-957510 addons                                                                        | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:09 UTC | 17 Jul 24 00:09 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-957510 ip                                                                            | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:11 UTC | 17 Jul 24 00:11 UTC |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:11 UTC | 17 Jul 24 00:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-957510 addons disable                                                                | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:11 UTC | 17 Jul 24 00:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-957510 addons                                                                        | addons-957510          | jenkins | v1.33.1 | 17 Jul 24 00:13 UTC | 17 Jul 24 00:13 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:50.010945   21245 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:50.011090   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:50.011103   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:50.011109   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:50.011336   21245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:04:50.011993   21245 out.go:298] Setting JSON to false
	I0717 00:04:50.012938   21245 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2837,"bootTime":1721171853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:50.013000   21245 start.go:139] virtualization: kvm guest
	I0717 00:04:50.015281   21245 out.go:177] * [addons-957510] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:04:50.017446   21245 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:04:50.017471   21245 notify.go:220] Checking for updates...
	I0717 00:04:50.020464   21245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:50.022122   21245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:04:50.023539   21245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	I0717 00:04:50.024988   21245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:04:50.026322   21245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:04:50.027842   21245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:50.048290   21245 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:04:50.048412   21245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:50.094091   21245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 00:04:50.085501517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:50.094194   21245 docker.go:307] overlay module found
	I0717 00:04:50.095971   21245 out.go:177] * Using the docker driver based on user configuration
	I0717 00:04:50.097177   21245 start.go:297] selected driver: docker
	I0717 00:04:50.097191   21245 start.go:901] validating driver "docker" against <nil>
	I0717 00:04:50.097200   21245 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:04:50.097944   21245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:50.142967   21245 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 00:04:50.134453498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:50.143121   21245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:50.143309   21245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:04:50.145135   21245 out.go:177] * Using Docker driver with root privileges
	I0717 00:04:50.146515   21245 cni.go:84] Creating CNI manager for ""
	I0717 00:04:50.146531   21245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:04:50.146543   21245 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:50.146612   21245 start.go:340] cluster config:
	{Name:addons-957510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:50.148072   21245 out.go:177] * Starting "addons-957510" primary control-plane node in "addons-957510" cluster
	I0717 00:04:50.149424   21245 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:04:50.150683   21245 out.go:177] * Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:04:50.151998   21245 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:50.152027   21245 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:04:50.152042   21245 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:50.152054   21245 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:50.152177   21245 preload.go:172] Found /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 00:04:50.152189   21245 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:04:50.152542   21245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/config.json ...
	I0717 00:04:50.152565   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/config.json: {Name:mka71b6e573dc07c21b369acac427de301799e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:04:50.167697   21245 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:04:50.167829   21245 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:04:50.167850   21245 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory, skipping pull
	I0717 00:04:50.167859   21245 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c exists in cache, skipping pull
	I0717 00:04:50.167869   21245 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:04:50.167896   21245 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c from local cache
	I0717 00:05:03.377041   21245 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c from cached tarball
	I0717 00:05:03.377078   21245 cache.go:194] Successfully downloaded all kic artifacts
	I0717 00:05:03.377135   21245 start.go:360] acquireMachinesLock for addons-957510: {Name:mk80820d022b2d12c4a1887cc77d38b1c4a0f210 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:05:03.377238   21245 start.go:364] duration metric: took 84.656µs to acquireMachinesLock for "addons-957510"
	I0717 00:05:03.377259   21245 start.go:93] Provisioning new machine with config: &{Name:addons-957510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:05:03.377330   21245 start.go:125] createHost starting for "" (driver="docker")
	I0717 00:05:03.468555   21245 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 00:05:03.468784   21245 start.go:159] libmachine.API.Create for "addons-957510" (driver="docker")
	I0717 00:05:03.468819   21245 client.go:168] LocalClient.Create starting
	I0717 00:05:03.468952   21245 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem
	I0717 00:05:03.562442   21245 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/cert.pem
	I0717 00:05:03.730042   21245 cli_runner.go:164] Run: docker network inspect addons-957510 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 00:05:03.746934   21245 cli_runner.go:211] docker network inspect addons-957510 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 00:05:03.746999   21245 network_create.go:284] running [docker network inspect addons-957510] to gather additional debugging logs...
	I0717 00:05:03.747022   21245 cli_runner.go:164] Run: docker network inspect addons-957510
	W0717 00:05:03.762556   21245 cli_runner.go:211] docker network inspect addons-957510 returned with exit code 1
	I0717 00:05:03.762586   21245 network_create.go:287] error running [docker network inspect addons-957510]: docker network inspect addons-957510: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-957510 not found
	I0717 00:05:03.762601   21245 network_create.go:289] output of [docker network inspect addons-957510]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-957510 not found
	
	** /stderr **
	I0717 00:05:03.762693   21245 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:05:03.778988   21245 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b09bd0}
	I0717 00:05:03.779054   21245 network_create.go:124] attempt to create docker network addons-957510 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 00:05:03.779120   21245 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-957510 addons-957510
	I0717 00:05:04.109420   21245 network_create.go:108] docker network addons-957510 192.168.49.0/24 created
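
For anyone replaying this step outside the test harness, the inspect-then-create sequence above boils down to the following sketch (profile name, subnet, and labels taken from this run; the extra ip-masq/icc options are trimmed):

	# Inspect fails if the network is absent; create it with the subnet minikube picked.
	docker network inspect addons-957510 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' \
	  || docker network create --driver=bridge \
	       --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	       -o com.docker.network.driver.mtu=1500 \
	       --label=created_by.minikube.sigs.k8s.io=true \
	       --label=name.minikube.sigs.k8s.io=addons-957510 \
	       addons-957510
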
	I0717 00:05:04.109453   21245 kic.go:121] calculated static IP "192.168.49.2" for the "addons-957510" container
	I0717 00:05:04.109511   21245 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 00:05:04.124860   21245 cli_runner.go:164] Run: docker volume create addons-957510 --label name.minikube.sigs.k8s.io=addons-957510 --label created_by.minikube.sigs.k8s.io=true
	I0717 00:05:04.223797   21245 oci.go:103] Successfully created a docker volume addons-957510
	I0717 00:05:04.223896   21245 cli_runner.go:164] Run: docker run --rm --name addons-957510-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-957510 --entrypoint /usr/bin/test -v addons-957510:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -d /var/lib
	I0717 00:05:07.253702   21245 cli_runner.go:217] Completed: docker run --rm --name addons-957510-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-957510 --entrypoint /usr/bin/test -v addons-957510:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -d /var/lib: (3.029750681s)
	I0717 00:05:07.253729   21245 oci.go:107] Successfully prepared a docker volume addons-957510
	I0717 00:05:07.253749   21245 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:07.253775   21245 kic.go:194] Starting extracting preloaded images to volume ...
	I0717 00:05:07.253829   21245 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-957510:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 00:05:11.990553   21245 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-957510:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c -I lz4 -xf /preloaded.tar -C /extractDir: (4.736690066s)
	I0717 00:05:11.990580   21245 kic.go:203] duration metric: took 4.736802613s to extract preloaded images to volume ...
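
The preload extraction can likewise be replayed by hand; a sketch, assuming a default $HOME/.minikube cache location (this run used a Jenkins-specific path) and the image tag without its digest:

	# Stream the lz4 image tarball into the profile's named volume via the kic base image.
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
	  -v addons-957510:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
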
	W0717 00:05:11.990708   21245 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 00:05:11.990835   21245 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 00:05:12.039324   21245 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-957510 --name addons-957510 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-957510 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-957510 --network addons-957510 --ip 192.168.49.2 --volume addons-957510:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c
	I0717 00:05:12.355099   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Running}}
	I0717 00:05:12.372938   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:12.392398   21245 cli_runner.go:164] Run: docker exec addons-957510 stat /var/lib/dpkg/alternatives/iptables
	I0717 00:05:12.435664   21245 oci.go:144] the created container "addons-957510" has a running status.
	I0717 00:05:12.435708   21245 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa...
	I0717 00:05:12.777218   21245 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 00:05:12.796294   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:12.814623   21245 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 00:05:12.814641   21245 kic_runner.go:114] Args: [docker exec --privileged addons-957510 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 00:05:12.862621   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:12.882016   21245 machine.go:94] provisionDockerMachine start ...
	I0717 00:05:12.882141   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:12.901768   21245 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:12.902045   21245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:05:12.902068   21245 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:05:13.039331   21245 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-957510
	
	I0717 00:05:13.039358   21245 ubuntu.go:169] provisioning hostname "addons-957510"
	I0717 00:05:13.039427   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.057879   21245 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:13.058051   21245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:05:13.058067   21245 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-957510 && echo "addons-957510" | sudo tee /etc/hostname
	I0717 00:05:13.186294   21245 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-957510
	
	I0717 00:05:13.186364   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.202822   21245 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:13.203038   21245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:05:13.203055   21245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-957510' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-957510/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-957510' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:05:13.320085   21245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:05:13.320113   21245 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19265-12715/.minikube CaCertPath:/home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19265-12715/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19265-12715/.minikube}
	I0717 00:05:13.320135   21245 ubuntu.go:177] setting up certificates
	I0717 00:05:13.320144   21245 provision.go:84] configureAuth start
	I0717 00:05:13.320189   21245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-957510
	I0717 00:05:13.336760   21245 provision.go:143] copyHostCerts
	I0717 00:05:13.336824   21245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19265-12715/.minikube/ca.pem (1082 bytes)
	I0717 00:05:13.336933   21245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19265-12715/.minikube/cert.pem (1123 bytes)
	I0717 00:05:13.336986   21245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19265-12715/.minikube/key.pem (1679 bytes)
	I0717 00:05:13.337033   21245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19265-12715/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca-key.pem org=jenkins.addons-957510 san=[127.0.0.1 192.168.49.2 addons-957510 localhost minikube]
	I0717 00:05:13.397464   21245 provision.go:177] copyRemoteCerts
	I0717 00:05:13.397516   21245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:05:13.397561   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.414687   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:13.504302   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:05:13.525684   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 00:05:13.547355   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:05:13.568924   21245 provision.go:87] duration metric: took 248.768454ms to configureAuth
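
The server certificate copied above can be spot-checked for the SANs generated a few lines earlier (127.0.0.1, 192.168.49.2, addons-957510, localhost, minikube); a sketch using the SSH port and key path from the sshutil lines, with the .minikube prefix shortened to $HOME/.minikube:

	# Print the SAN extension of the server cert that was just scp'd to /etc/docker.
	ssh -o StrictHostKeyChecking=no -p 32768 \
	  -i "$HOME/.minikube/machines/addons-957510/id_rsa" docker@127.0.0.1 \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"
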
	I0717 00:05:13.568948   21245 ubuntu.go:193] setting minikube options for container-runtime
	I0717 00:05:13.569130   21245 config.go:182] Loaded profile config "addons-957510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:13.569236   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.585863   21245 main.go:141] libmachine: Using SSH client type: native
	I0717 00:05:13.586033   21245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0717 00:05:13.586050   21245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 00:05:13.791805   21245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 00:05:13.791838   21245 machine.go:97] duration metric: took 909.784997ms to provisionDockerMachine
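
A quick way to confirm the /etc/sysconfig/crio.minikube drop-in written just above survived the crio restart (a sketch, using docker exec as this log does elsewhere):

	# Expect the insecure-registry line back, and an active runtime.
	docker exec addons-957510 cat /etc/sysconfig/crio.minikube
	docker exec addons-957510 systemctl is-active crio
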
	I0717 00:05:13.791851   21245 client.go:171] duration metric: took 10.323025732s to LocalClient.Create
	I0717 00:05:13.791907   21245 start.go:167] duration metric: took 10.32309443s to libmachine.API.Create "addons-957510"
	I0717 00:05:13.791919   21245 start.go:293] postStartSetup for "addons-957510" (driver="docker")
	I0717 00:05:13.791937   21245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:05:13.792020   21245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:05:13.792065   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.809908   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:13.896338   21245 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:05:13.899282   21245 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 00:05:13.899314   21245 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 00:05:13.899335   21245 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 00:05:13.899345   21245 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 00:05:13.899357   21245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12715/.minikube/addons for local assets ...
	I0717 00:05:13.899407   21245 filesync.go:126] Scanning /home/jenkins/minikube-integration/19265-12715/.minikube/files for local assets ...
	I0717 00:05:13.899430   21245 start.go:296] duration metric: took 107.503118ms for postStartSetup
	I0717 00:05:13.899706   21245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-957510
	I0717 00:05:13.916357   21245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/config.json ...
	I0717 00:05:13.916591   21245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:05:13.916635   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:13.934259   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:14.016539   21245 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 00:05:14.020627   21245 start.go:128] duration metric: took 10.643282827s to createHost
	I0717 00:05:14.020650   21245 start.go:83] releasing machines lock for "addons-957510", held for 10.643401438s
	I0717 00:05:14.020726   21245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-957510
	I0717 00:05:14.037196   21245 ssh_runner.go:195] Run: cat /version.json
	I0717 00:05:14.037234   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:14.037280   21245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 00:05:14.037348   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:14.053571   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:14.054408   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:14.210243   21245 ssh_runner.go:195] Run: systemctl --version
	I0717 00:05:14.214433   21245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 00:05:14.349623   21245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 00:05:14.353727   21245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:05:14.371504   21245 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 00:05:14.371585   21245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:05:14.397629   21245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
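
The bridge/podman configs are renamed rather than deleted, so the result is easy to audit by hand (sketch):

	# Disabled CNI configs carry a .mk_disabled suffix; kindnet's config is added later.
	docker exec addons-957510 ls -l /etc/cni/net.d/
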
	I0717 00:05:14.397649   21245 start.go:495] detecting cgroup driver to use...
	I0717 00:05:14.397675   21245 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 00:05:14.397726   21245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 00:05:14.410562   21245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 00:05:14.420099   21245 docker.go:217] disabling cri-docker service (if available) ...
	I0717 00:05:14.420153   21245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 00:05:14.431492   21245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 00:05:14.444529   21245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 00:05:14.517624   21245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 00:05:14.593371   21245 docker.go:233] disabling docker service ...
	I0717 00:05:14.593440   21245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 00:05:14.610877   21245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 00:05:14.621166   21245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 00:05:14.691936   21245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 00:05:14.769151   21245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 00:05:14.779432   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:05:14.793650   21245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 00:05:14.793706   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.802382   21245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 00:05:14.802446   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.811114   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.819989   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.829025   21245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:05:14.837521   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.846991   21245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.861793   21245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 00:05:14.870980   21245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:05:14.879352   21245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:05:14.887526   21245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:14.965017   21245 ssh_runner.go:195] Run: sudo systemctl restart crio
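
Taken together, the sed edits above should leave the pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl set in 02-crio.conf; a verification sketch, with expected values reconstructed from the commands rather than captured from this run:

	# Expect: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs",
	# conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" in default_sysctls.
	docker exec addons-957510 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
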
	I0717 00:05:15.061104   21245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 00:05:15.061167   21245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 00:05:15.064389   21245 start.go:563] Will wait 60s for crictl version
	I0717 00:05:15.064443   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:05:15.067475   21245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:05:15.101126   21245 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
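
That version check exercises the endpoint configured in /etc/crictl.yaml above; to repeat it by hand (sketch):

	# Expect RuntimeName cri-o and RuntimeVersion 1.24.6, as in the output above.
	docker exec addons-957510 sudo crictl version
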
	I0717 00:05:15.101237   21245 ssh_runner.go:195] Run: crio --version
	I0717 00:05:15.134531   21245 ssh_runner.go:195] Run: crio --version
	I0717 00:05:15.169419   21245 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0717 00:05:15.170913   21245 cli_runner.go:164] Run: docker network inspect addons-957510 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:05:15.187540   21245 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 00:05:15.191202   21245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:05:15.201539   21245 kubeadm.go:883] updating cluster {Name:addons-957510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:05:15.201659   21245 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:05:15.201707   21245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:05:15.262252   21245 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:05:15.262273   21245 crio.go:433] Images already preloaded, skipping extraction
	I0717 00:05:15.262313   21245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 00:05:15.293794   21245 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 00:05:15.293813   21245 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:05:15.293820   21245 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0717 00:05:15.293900   21245 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-957510 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
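
The unit text above lands in the kubelet systemd drop-in (the 363-byte 10-kubeadm.conf scp'd a few lines below); the merged unit can be inspected with systemctl (sketch):

	# Shows kubelet.service plus the 10-kubeadm.conf drop-in carrying the ExecStart above.
	docker exec addons-957510 systemctl cat kubelet
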
	I0717 00:05:15.293967   21245 ssh_runner.go:195] Run: crio config
	I0717 00:05:15.334804   21245 cni.go:84] Creating CNI manager for ""
	I0717 00:05:15.334825   21245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:15.334839   21245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:05:15.334860   21245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-957510 NodeName:addons-957510 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:05:15.334992   21245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-957510"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
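
This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new (see the 2151-byte scp below) and can be sanity-checked with kubeadm itself; a sketch, assuming 'kubeadm config validate' (available in recent kubeadm releases, including the v1.30 binaries staged on the node):

	# Validates the InitConfiguration/ClusterConfiguration documents before init runs.
	docker exec addons-957510 sudo /var/lib/minikube/binaries/v1.30.2/kubeadm \
	  config validate --config /var/tmp/minikube/kubeadm.yaml.new
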
	
	I0717 00:05:15.335052   21245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:05:15.344067   21245 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:05:15.344130   21245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:05:15.352485   21245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0717 00:05:15.368837   21245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:05:15.385751   21245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0717 00:05:15.402881   21245 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 00:05:15.406261   21245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:05:15.416360   21245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:15.493106   21245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:05:15.505127   21245 certs.go:68] Setting up /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510 for IP: 192.168.49.2
	I0717 00:05:15.505151   21245 certs.go:194] generating shared ca certs ...
	I0717 00:05:15.505166   21245 certs.go:226] acquiring lock for ca certs: {Name:mk4aaa9cd83a5144bc0eaf83922d126bac8dea0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.505284   21245 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19265-12715/.minikube/ca.key
	I0717 00:05:15.554459   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt ...
	I0717 00:05:15.554485   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt: {Name:mkafd762b74e91501469150fd7dec47494e5a802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.554636   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/ca.key ...
	I0717 00:05:15.554647   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/ca.key: {Name:mkf91c539d4b21ac62d660b304b2c0b65b6fafbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.554721   21245 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.key
	I0717 00:05:15.651335   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.crt ...
	I0717 00:05:15.651368   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.crt: {Name:mk849693914080d208b7a0bb1b7eedd342e5c5d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.651551   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.key ...
	I0717 00:05:15.651565   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.key: {Name:mk30e94cb3d1c7c38b4e620d8835d82d0a2962e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.651655   21245 certs.go:256] generating profile certs ...
	I0717 00:05:15.651721   21245 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.key
	I0717 00:05:15.651743   21245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt with IP's: []
	I0717 00:05:15.717124   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt ...
	I0717 00:05:15.717166   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: {Name:mka919a48dee2862c11a053e2f7c8d1c5d4e9aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.717362   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.key ...
	I0717 00:05:15.717377   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.key: {Name:mk8b1a754a7516e074e7acb2d70958123f670a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.717474   21245 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key.9d9c2dab
	I0717 00:05:15.717497   21245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt.9d9c2dab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0717 00:05:15.836544   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt.9d9c2dab ...
	I0717 00:05:15.836577   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt.9d9c2dab: {Name:mkb4edbae1a4b51e3798c09f2e57c052f997d26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.836756   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key.9d9c2dab ...
	I0717 00:05:15.836770   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key.9d9c2dab: {Name:mka61dc78c62d1b78f6cfaf6e64458f43e24daf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.836843   21245 certs.go:381] copying /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt.9d9c2dab -> /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt
	I0717 00:05:15.836923   21245 certs.go:385] copying /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key.9d9c2dab -> /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key
	I0717 00:05:15.836966   21245 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.key
	I0717 00:05:15.836982   21245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.crt with IP's: []
	I0717 00:05:15.910467   21245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.crt ...
	I0717 00:05:15.910493   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.crt: {Name:mkc23b3355ac79021789571e1065eafc3b48c365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.910639   21245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.key ...
	I0717 00:05:15.910649   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.key: {Name:mk782db588d186f30c2fff8f1973a8e6902f62f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:15.910803   21245 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 00:05:15.910833   21245 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/ca.pem (1082 bytes)
	I0717 00:05:15.910857   21245 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/cert.pem (1123 bytes)
	I0717 00:05:15.910878   21245 certs.go:484] found cert: /home/jenkins/minikube-integration/19265-12715/.minikube/certs/key.pem (1679 bytes)
	I0717 00:05:15.911398   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:05:15.932393   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 00:05:15.952472   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:05:15.972940   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 00:05:15.993113   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:05:16.013591   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:05:16.034284   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:05:16.054438   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 00:05:16.074376   21245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:05:16.094556   21245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:05:16.109302   21245 ssh_runner.go:195] Run: openssl version
	I0717 00:05:16.114114   21245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:05:16.122967   21245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:16.126103   21245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:05 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:16.126148   21245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:05:16.132287   21245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
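	(The openssl/ln steps above install the minikube CA the way OpenSSL's trust lookup expects: compute the certificate's subject hash, then symlink <hash>.0 (b5213941.0 in this run) to the PEM under /etc/ssl/certs. A rough Go equivalent, assuming openssl is on PATH and using the paths from the log:)

```go
// Sketch of the CA-install step: hash the cert, then create the
// /etc/ssl/certs/<hash>.0 symlink that TLS clients search for.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in this run
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // emulate ln -fs: replace any stale link
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
	fmt.Println("installed", link)
}
```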
	I0717 00:05:16.140054   21245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:05:16.142796   21245 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:05:16.142846   21245 kubeadm.go:392] StartCluster: {Name:addons-957510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-957510 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:05:16.142917   21245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 00:05:16.142952   21245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 00:05:16.174043   21245 cri.go:89] found id: ""
	I0717 00:05:16.174105   21245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:05:16.182223   21245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:05:16.189918   21245 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0717 00:05:16.189992   21245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:05:16.199229   21245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:05:16.199246   21245 kubeadm.go:157] found existing configuration files:
	
	I0717 00:05:16.199293   21245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:05:16.206780   21245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:05:16.206830   21245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:05:16.214660   21245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:05:16.222332   21245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:05:16.222382   21245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:05:16.230312   21245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:05:16.237960   21245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:05:16.238020   21245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:05:16.245872   21245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:05:16.253325   21245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:05:16.253367   21245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
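	(The grep/rm sequence above is minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; on this first start none exist, so every `rm -f` is a no-op. Sketched in Go below; illustrative only, not the actual kubeadm.go code.)

```go
// Sketch of the stale-config sweep: keep a conf file only if it already
// references the expected control-plane endpoint, otherwise remove it so
// kubeadm can regenerate it.
package main

import (
	"bytes"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // mirrors `sudo rm -f <conf>` in the log
		}
	}
}
```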
	I0717 00:05:16.260549   21245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 00:05:16.332726   21245 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1062-gcp\n", err: exit status 1
	I0717 00:05:16.384020   21245 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:05:25.834697   21245 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:05:25.834776   21245 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:05:25.834908   21245 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0717 00:05:25.834989   21245 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1062-gcp
	I0717 00:05:25.835056   21245 kubeadm.go:310] OS: Linux
	I0717 00:05:25.835120   21245 kubeadm.go:310] CGROUPS_CPU: enabled
	I0717 00:05:25.835171   21245 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0717 00:05:25.835255   21245 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0717 00:05:25.835337   21245 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0717 00:05:25.835407   21245 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0717 00:05:25.835473   21245 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0717 00:05:25.835550   21245 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0717 00:05:25.835631   21245 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0717 00:05:25.835710   21245 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0717 00:05:25.835815   21245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:05:25.835975   21245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:05:25.836059   21245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 00:05:25.836111   21245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:05:25.838000   21245 out.go:204]   - Generating certificates and keys ...
	I0717 00:05:25.838084   21245 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:05:25.838143   21245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:05:25.838211   21245 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:05:25.838278   21245 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:05:25.838360   21245 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:05:25.838436   21245 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:05:25.838511   21245 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:05:25.838668   21245 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-957510 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:05:25.838742   21245 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:05:25.838896   21245 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-957510 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:05:25.838987   21245 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:05:25.839056   21245 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:05:25.839098   21245 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:05:25.839157   21245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:05:25.839205   21245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:05:25.839253   21245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:05:25.839304   21245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:05:25.839357   21245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:05:25.839412   21245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:05:25.839479   21245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:05:25.839533   21245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:05:25.841019   21245 out.go:204]   - Booting up control plane ...
	I0717 00:05:25.841128   21245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:05:25.841199   21245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:05:25.841255   21245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:05:25.841344   21245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:05:25.841439   21245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:05:25.841489   21245 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:05:25.841617   21245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:05:25.841679   21245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:05:25.841731   21245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.112151ms
	I0717 00:05:25.841793   21245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:05:25.841843   21245 kubeadm.go:310] [api-check] The API server is healthy after 4.502007969s
	I0717 00:05:25.841940   21245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:05:25.842053   21245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:05:25.842113   21245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:05:25.842273   21245 kubeadm.go:310] [mark-control-plane] Marking the node addons-957510 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:05:25.842336   21245 kubeadm.go:310] [bootstrap-token] Using token: pl3pji.fe9z3wlbs9jxiyvg
	I0717 00:05:25.843974   21245 out.go:204]   - Configuring RBAC rules ...
	I0717 00:05:25.844090   21245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:05:25.844195   21245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:05:25.844327   21245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:05:25.844532   21245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:05:25.844646   21245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:05:25.844729   21245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:05:25.844836   21245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:05:25.844889   21245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:05:25.844936   21245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:05:25.844943   21245 kubeadm.go:310] 
	I0717 00:05:25.844999   21245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:05:25.845008   21245 kubeadm.go:310] 
	I0717 00:05:25.845085   21245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:05:25.845093   21245 kubeadm.go:310] 
	I0717 00:05:25.845114   21245 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:05:25.845163   21245 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:05:25.845211   21245 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:05:25.845217   21245 kubeadm.go:310] 
	I0717 00:05:25.845266   21245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:05:25.845272   21245 kubeadm.go:310] 
	I0717 00:05:25.845311   21245 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:05:25.845316   21245 kubeadm.go:310] 
	I0717 00:05:25.845365   21245 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:05:25.845442   21245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:05:25.845499   21245 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:05:25.845505   21245 kubeadm.go:310] 
	I0717 00:05:25.845580   21245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:05:25.845671   21245 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:05:25.845685   21245 kubeadm.go:310] 
	I0717 00:05:25.845793   21245 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pl3pji.fe9z3wlbs9jxiyvg \
	I0717 00:05:25.845887   21245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:daf389ec49e00d61976d9dc190f73df8121e276c738a86d1ec306a03abd6f344 \
	I0717 00:05:25.845912   21245 kubeadm.go:310] 	--control-plane 
	I0717 00:05:25.845919   21245 kubeadm.go:310] 
	I0717 00:05:25.845991   21245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:05:25.845998   21245 kubeadm.go:310] 
	I0717 00:05:25.846069   21245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pl3pji.fe9z3wlbs9jxiyvg \
	I0717 00:05:25.846184   21245 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:daf389ec49e00d61976d9dc190f73df8121e276c738a86d1ec306a03abd6f344 
	I0717 00:05:25.846200   21245 cni.go:84] Creating CNI manager for ""
	I0717 00:05:25.846211   21245 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:05:25.848001   21245 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 00:05:25.849232   21245 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 00:05:25.852856   21245 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 00:05:25.852879   21245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 00:05:25.869210   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 00:05:26.065353   21245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:05:26.065449   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:26.065454   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-957510 minikube.k8s.io/updated_at=2024_07_17T00_05_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91 minikube.k8s.io/name=addons-957510 minikube.k8s.io/primary=true
	I0717 00:05:26.072284   21245 ops.go:34] apiserver oom_adj: -16
	I0717 00:05:26.224718   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:26.724773   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:27.225516   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:27.724909   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:28.225121   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:28.725077   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:29.225763   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:29.725169   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:30.225573   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:30.725487   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:31.224864   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:31.725124   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:32.225774   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:32.725129   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:33.225121   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:33.725690   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:34.225180   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:34.725112   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:35.224843   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:35.725088   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:36.225749   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:36.725093   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:37.224825   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:37.724807   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:38.225750   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:38.724748   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:39.224931   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:39.724783   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:40.225523   21245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:05:40.289315   21245 kubeadm.go:1113] duration metric: took 14.223923759s to wait for elevateKubeSystemPrivileges
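	(The repeated `kubectl get sa default` runs above are a simple readiness poll: minikube reruns the command roughly every 500ms until the default service account exists, which took about 14.2s here. A condensed Go sketch of the same loop follows; the 6-minute cap is an assumption borrowed from the node wait below, not part of this specific loop.)

```go
// Sketch of the service-account poll: rerun `kubectl get sa default`
// every 500ms until it succeeds or a (hypothetical) deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	deadline := start.Add(6 * time.Minute) // assumed cap
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.2/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Printf("default service account ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```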
	I0717 00:05:40.289352   21245 kubeadm.go:394] duration metric: took 24.14650761s to StartCluster
	I0717 00:05:40.289372   21245 settings.go:142] acquiring lock: {Name:mk9a09422d46b143eae10f5996fa2de67145de97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:40.289483   21245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:05:40.289964   21245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/kubeconfig: {Name:mkf7e1e083f0112534ba419cb3d886353389254d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:05:40.290197   21245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 00:05:40.290238   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:05:40.290319   21245 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 00:05:40.290390   21245 config.go:182] Loaded profile config "addons-957510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:40.290415   21245 addons.go:69] Setting ingress-dns=true in profile "addons-957510"
	I0717 00:05:40.290428   21245 addons.go:69] Setting helm-tiller=true in profile "addons-957510"
	I0717 00:05:40.290430   21245 addons.go:69] Setting metrics-server=true in profile "addons-957510"
	I0717 00:05:40.290439   21245 addons.go:69] Setting gcp-auth=true in profile "addons-957510"
	I0717 00:05:40.290452   21245 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-957510"
	I0717 00:05:40.290461   21245 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-957510"
	I0717 00:05:40.290462   21245 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-957510"
	I0717 00:05:40.290467   21245 addons.go:69] Setting registry=true in profile "addons-957510"
	I0717 00:05:40.290470   21245 mustload.go:65] Loading cluster: addons-957510
	I0717 00:05:40.290480   21245 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-957510"
	I0717 00:05:40.290473   21245 addons.go:69] Setting volcano=true in profile "addons-957510"
	I0717 00:05:40.290488   21245 addons.go:69] Setting volumesnapshots=true in profile "addons-957510"
	I0717 00:05:40.290506   21245 addons.go:234] Setting addon volcano=true in "addons-957510"
	I0717 00:05:40.290461   21245 addons.go:69] Setting storage-provisioner=true in profile "addons-957510"
	I0717 00:05:40.290515   21245 addons.go:234] Setting addon volumesnapshots=true in "addons-957510"
	I0717 00:05:40.290526   21245 addons.go:234] Setting addon storage-provisioner=true in "addons-957510"
	I0717 00:05:40.290539   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290539   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290552   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290454   21245 addons.go:234] Setting addon helm-tiller=true in "addons-957510"
	I0717 00:05:40.290592   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290676   21245 config.go:182] Loaded profile config "addons-957510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:05:40.290813   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290983   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.291012   21245 addons.go:69] Setting ingress=true in profile "addons-957510"
	I0717 00:05:40.291058   21245 addons.go:234] Setting addon ingress=true in "addons-957510"
	I0717 00:05:40.291097   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290433   21245 addons.go:69] Setting default-storageclass=true in profile "addons-957510"
	I0717 00:05:40.291133   21245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-957510"
	I0717 00:05:40.290506   21245 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-957510"
	I0717 00:05:40.291181   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290483   21245 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-957510"
	I0717 00:05:40.291235   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.291398   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.291522   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.291601   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290456   21245 addons.go:234] Setting addon metrics-server=true in "addons-957510"
	I0717 00:05:40.291647   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.291665   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.290455   21245 addons.go:234] Setting addon ingress-dns=true in "addons-957510"
	I0717 00:05:40.292058   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.292111   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290413   21245 addons.go:69] Setting yakd=true in profile "addons-957510"
	I0717 00:05:40.292591   21245 addons.go:234] Setting addon yakd=true in "addons-957510"
	I0717 00:05:40.292619   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290983   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290484   21245 addons.go:234] Setting addon registry=true in "addons-957510"
	I0717 00:05:40.293178   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.293859   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.292628   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.294920   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290985   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290993   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.296935   21245 out.go:177] * Verifying Kubernetes components...
	I0717 00:05:40.290448   21245 addons.go:69] Setting cloud-spanner=true in profile "addons-957510"
	I0717 00:05:40.297142   21245 addons.go:234] Setting addon cloud-spanner=true in "addons-957510"
	I0717 00:05:40.297186   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.297688   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290422   21245 addons.go:69] Setting inspektor-gadget=true in profile "addons-957510"
	I0717 00:05:40.298417   21245 addons.go:234] Setting addon inspektor-gadget=true in "addons-957510"
	I0717 00:05:40.298479   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.299023   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.290991   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.308189   21245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:05:40.332163   21245 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-957510"
	I0717 00:05:40.332209   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.332658   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.337398   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:05:40.338952   21245 addons.go:234] Setting addon default-storageclass=true in "addons-957510"
	I0717 00:05:40.338996   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.339434   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:40.345522   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:05:40.347275   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:05:40.347306   21245 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:05:40.347371   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.347562   21245 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:05:40.345531   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:05:40.349426   21245 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:05:40.349445   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:05:40.349503   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.352017   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:05:40.353580   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:05:40.354963   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 00:05:40.356351   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:05:40.357606   21245 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:05:40.357693   21245 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:05:40.357763   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 00:05:40.359435   21245 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:05:40.359466   21245 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:05:40.359487   21245 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:05:40.359507   21245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:05:40.359539   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.359566   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.361600   21245 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:05:40.363142   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:05:40.363166   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:05:40.363227   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.363389   21245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:05:40.365573   21245 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:05:40.367040   21245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:40.367305   21245 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:05:40.367336   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:05:40.367406   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.377906   21245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:40.382199   21245 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:05:40.382234   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:05:40.382296   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.391074   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:40.402013   21245 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:05:40.402053   21245 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:05:40.403588   21245 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:05:40.403610   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:05:40.403671   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.403903   21245 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:05:40.404077   21245 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:05:40.404089   21245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:05:40.404136   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.405905   21245 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:05:40.405922   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:05:40.405965   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.407111   21245 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 00:05:40.408687   21245 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 00:05:40.408707   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 00:05:40.408771   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	W0717 00:05:40.411030   21245 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 00:05:40.418281   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.426810   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.446440   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.450468   21245 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:05:40.450521   21245 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:05:40.452726   21245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:05:40.452749   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:05:40.452808   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.452726   21245 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:05:40.454267   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.455222   21245 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:05:40.455239   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:05:40.455298   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.455524   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.455543   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.469550   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.471817   21245 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:05:40.473451   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:05:40.473475   21245 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:05:40.473542   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:40.479011   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.481412   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.486103   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.486187   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.490302   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.496652   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:40.496754   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	W0717 00:05:40.528216   21245 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0717 00:05:40.528253   21245 retry.go:31] will retry after 360.056519ms: ssh: handshake failed: EOF
	W0717 00:05:40.528291   21245 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0717 00:05:40.528310   21245 retry.go:31] will retry after 261.845108ms: ssh: handshake failed: EOF
	I0717 00:05:40.624237   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
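	(Decoded, the sed pipeline above makes two Corefile edits before `kubectl replace` writes the ConfigMap back: it adds a `log` directive ahead of the `errors` plugin, and inserts the hosts stanza below ahead of the `forward` block. Reconstructed from the -e expressions in that command, this stanza is what makes host.minikube.internal resolve to the host gateway 192.168.49.1, as confirmed by the "host record injected" line later in the log.)

```
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
```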
	I0717 00:05:40.735370   21245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:05:40.821077   21245 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:05:40.821161   21245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:05:40.839961   21245 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:05:40.840009   21245 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:05:40.922005   21245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:05:40.922096   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:05:40.940726   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:05:40.941452   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:05:41.023206   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:05:41.023635   21245 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 00:05:41.023661   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:05:41.023660   21245 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 00:05:41.023696   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:05:41.025458   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:05:41.035913   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:05:41.035943   21245 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:05:41.043287   21245 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:05:41.043375   21245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:05:41.044905   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:05:41.126630   21245 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:05:41.126712   21245 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:05:41.129838   21245 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:05:41.129860   21245 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:05:41.322275   21245 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:05:41.322314   21245 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:05:41.324830   21245 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:05:41.324855   21245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:05:41.325742   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:05:41.325767   21245 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:05:41.332315   21245 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:05:41.332350   21245 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 00:05:41.420488   21245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:05:41.420521   21245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:05:41.421123   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:05:41.421188   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:05:41.422239   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:05:41.439076   21245 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:05:41.439169   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:05:41.530779   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:05:41.530810   21245 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:05:41.621367   21245 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:05:41.621396   21245 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:05:41.621634   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:05:41.621649   21245 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:05:41.631199   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:05:41.643551   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:05:41.721350   21245 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:05:41.721380   21245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:05:41.723105   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:05:41.723178   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:05:41.833302   21245 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:05:41.833382   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:05:41.835963   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:05:41.926884   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:05:41.930449   21245 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:05:41.930477   21245 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:05:41.939152   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:05:41.939181   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:05:42.222224   21245 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:05:42.222270   21245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:05:42.222489   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:05:42.234279   21245 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:42.234307   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:05:42.528377   21245 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:05:42.528411   21245 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:05:42.631382   21245 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.007099195s)
	I0717 00:05:42.631418   21245 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
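The sed pipeline completed above edits the coredns ConfigMap in place: it splices a hosts stanza ahead of the Corefile's forward directive and pushes the result back with kubectl replace, which is what makes host.minikube.internal resolve to the Docker network gateway (192.168.49.1). The injected stanza is the one quoted in the command itself:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}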
	I0717 00:05:42.632650   21245 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.897249827s)
	I0717 00:05:42.633565   21245 node_ready.go:35] waiting up to 6m0s for node "addons-957510" to be "Ready" ...
	I0717 00:05:42.734503   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:42.921477   21245 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:05:42.921573   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:05:42.921942   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:05:42.921987   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:05:43.321374   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:05:43.333438   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:05:43.333468   21245 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:05:43.336006   21245 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-957510" context rescaled to 1 replicas
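The rescale above exists because kubeadm deploys CoreDNS with two replicas, while a single-node minikube cluster only needs one. kapi.go does the equivalent of this kubectl scale call (context name taken from the log):

	kubectl --context addons-957510 -n kube-system scale deployment coredns --replicas=1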
	I0717 00:05:43.839919   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:05:43.840033   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:05:44.034734   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:05:44.034825   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:05:44.322435   21245 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:05:44.322513   21245 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:05:44.426203   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.485372961s)
	I0717 00:05:44.426368   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.484870972s)
	I0717 00:05:44.426436   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.403199819s)
	I0717 00:05:44.426479   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.400981247s)
	I0717 00:05:44.521366   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:05:44.723432   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:46.821237   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.776281242s)
	I0717 00:05:46.821403   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.177821841s)
	I0717 00:05:46.821410   21245 addons.go:475] Verifying addon ingress=true in "addons-957510"
	I0717 00:05:46.821471   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.98547469s)
	I0717 00:05:46.821375   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.190125888s)
	I0717 00:05:46.821564   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.894647637s)
	I0717 00:05:46.821582   21245 addons.go:475] Verifying addon metrics-server=true in "addons-957510"
	I0717 00:05:46.821499   21245 addons.go:475] Verifying addon registry=true in "addons-957510"
	I0717 00:05:46.821651   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.59911204s)
	I0717 00:05:46.821276   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.398974705s)
	I0717 00:05:46.823383   21245 out.go:177] * Verifying registry addon...
	I0717 00:05:46.823385   21245 out.go:177] * Verifying ingress addon...
	I0717 00:05:46.824385   21245 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-957510 service yakd-dashboard -n yakd-dashboard
	
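The printed hint relies on minikube's service tunnel: minikube service looks up the Service's NodePort and opens (or prints) a reachable URL. For scripted access, the --url flag prints the address without launching a browser:

	minikube -p addons-957510 service yakd-dashboard -n yakd-dashboard --url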
	I0717 00:05:46.826030   21245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:05:46.826942   21245 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:05:46.832535   21245 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:05:46.832563   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:46.832844   21245 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:05:46.832866   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
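From here the log is dominated by kapi.go poll loops: each "waiting for pod" line is one poll of the labelled pods (roughly every half second, judging by the timestamps), repeated until every matching pod reports Ready; the bracketed [<nil>] is Go's rendering of an empty detail value, not an error. A manual equivalent for the ingress selector, assuming the same kube context, would be:

	kubectl --context addons-957510 -n ingress-nginx get pods \
	  -l app.kubernetes.io/name=ingress-nginx --watch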
	I0717 00:05:47.137167   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:47.333560   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:47.334170   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:47.546384   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.81183177s)
	W0717 00:05:47.546431   21245 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:05:47.546445   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.224966953s)
	I0717 00:05:47.546453   21245 retry.go:31] will retry after 277.065396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
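The two failures above are a CRD-establishment race, not a broken manifest: the VolumeSnapshotClass object ships in the same kubectl apply batch as the CRDs that define it, and the API server has not finished establishing the new CRDs when kubectl tries to resolve the kind, hence "ensure CRDs are installed first". minikube's answer is the 277ms retry scheduled above, followed by the apply --force rerun below, which succeeds once the CRDs are registered. A minimal sketch of the race-free ordering, using the same manifest paths from the log:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml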
	I0717 00:05:47.625437   21245 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:05:47.625515   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:47.651370   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:47.823717   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:05:47.831339   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:47.832445   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:47.842012   21245 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:05:47.931226   21245 addons.go:234] Setting addon gcp-auth=true in "addons-957510"
	I0717 00:05:47.931288   21245 host.go:66] Checking if "addons-957510" exists ...
	I0717 00:05:47.931805   21245 cli_runner.go:164] Run: docker container inspect addons-957510 --format={{.State.Status}}
	I0717 00:05:47.960041   21245 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:05:47.960084   21245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-957510
	I0717 00:05:47.977681   21245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/addons-957510/id_rsa Username:docker}
	I0717 00:05:48.330394   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:48.334118   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:48.439918   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.918410304s)
	I0717 00:05:48.439964   21245 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-957510"
	I0717 00:05:48.442652   21245 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:05:48.444562   21245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:05:48.451481   21245 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:05:48.451505   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:48.830207   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:48.830238   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:48.948603   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:49.329954   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:49.330327   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:49.448271   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:49.636987   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:49.830924   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:49.831554   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:49.950228   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:50.333816   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:50.334737   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:50.449485   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:50.830582   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:50.830627   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:50.948855   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:51.048818   21245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.225053499s)
	I0717 00:05:51.048996   21245 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.088924153s)
	I0717 00:05:51.051611   21245 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:05:51.053398   21245 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:05:51.054868   21245 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:05:51.054889   21245 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:05:51.073647   21245 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:05:51.073672   21245 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:05:51.122467   21245 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:05:51.122492   21245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:05:51.142149   21245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:05:51.330163   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:51.331105   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:51.449925   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:51.638982   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:51.747966   21245 addons.go:475] Verifying addon gcp-auth=true in "addons-957510"
	I0717 00:05:51.749181   21245 out.go:177] * Verifying gcp-auth addon...
	I0717 00:05:51.751337   21245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:05:51.755324   21245 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:05:51.755345   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:51.830473   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:51.830658   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:51.949614   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:52.254558   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:52.330686   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:52.331939   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:52.449541   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:52.754786   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:52.830481   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:52.831091   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:52.948782   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:53.254939   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:53.329925   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:53.330211   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:53.448274   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:53.754431   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:53.829876   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:53.830323   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:53.948276   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:54.136733   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:54.254303   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:54.329541   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:54.329952   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:54.448962   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:54.755254   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:54.830269   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:54.830695   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:54.948844   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:55.254971   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:55.329899   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:55.329988   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:55.448840   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:55.755271   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:55.830223   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:55.830223   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:55.948408   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:56.136818   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:56.254330   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:56.329487   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:56.330375   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:56.448715   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:56.754908   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:56.830036   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:56.830171   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:56.948406   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:57.253969   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:57.329967   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:57.330149   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:57.450347   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:57.754049   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:57.829859   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:57.830001   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:57.948369   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:58.136994   21245 node_ready.go:53] node "addons-957510" has status "Ready":"False"
	I0717 00:05:58.254489   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:58.330032   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:58.330482   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:58.448203   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:58.754197   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:58.830521   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:58.830626   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:58.948971   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:59.138393   21245 node_ready.go:49] node "addons-957510" has status "Ready":"True"
	I0717 00:05:59.138422   21245 node_ready.go:38] duration metric: took 16.504830191s for node "addons-957510" to be "Ready" ...
	I0717 00:05:59.138435   21245 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
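Node readiness consumed 16.5s of the 6m budget; the test now switches to per-pod waits, walking each system-critical component by label or name. The check that node_ready.go performs boils down to reading the node's Ready condition, e.g. (context and node name from the log):

	kubectl --context addons-957510 get node addons-957510 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'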
	I0717 00:05:59.149562   21245 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5wj8z" in "kube-system" namespace to be "Ready" ...
	I0717 00:05:59.254345   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:59.331442   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.332157   21245 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:05:59.332179   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:59.450066   21245 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:05:59.450094   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:05:59.755180   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:05:59.832570   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:05:59.834009   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:05:59.950506   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:00.254372   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:00.330308   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:00.330405   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:00.449312   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:00.654274   21245 pod_ready.go:92] pod "coredns-7db6d8ff4d-5wj8z" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.654298   21245 pod_ready.go:81] duration metric: took 1.504708039s for pod "coredns-7db6d8ff4d-5wj8z" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.654326   21245 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.657975   21245 pod_ready.go:92] pod "etcd-addons-957510" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.657995   21245 pod_ready.go:81] duration metric: took 3.660239ms for pod "etcd-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.658009   21245 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.661675   21245 pod_ready.go:92] pod "kube-apiserver-addons-957510" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.661693   21245 pod_ready.go:81] duration metric: took 3.676497ms for pod "kube-apiserver-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.661703   21245 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.665215   21245 pod_ready.go:92] pod "kube-controller-manager-addons-957510" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.665233   21245 pod_ready.go:81] duration metric: took 3.522159ms for pod "kube-controller-manager-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.665243   21245 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bvcbh" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.736325   21245 pod_ready.go:92] pod "kube-proxy-bvcbh" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:00.736345   21245 pod_ready.go:81] duration metric: took 71.096153ms for pod "kube-proxy-bvcbh" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.736355   21245 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:00.754844   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:00.830768   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:00.831169   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:00.950039   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:01.137128   21245 pod_ready.go:92] pod "kube-scheduler-addons-957510" in "kube-system" namespace has status "Ready":"True"
	I0717 00:06:01.137149   21245 pod_ready.go:81] duration metric: took 400.788339ms for pod "kube-scheduler-addons-957510" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:01.137159   21245 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace to be "Ready" ...
	I0717 00:06:01.255202   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:01.331365   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:01.331506   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:01.450481   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:01.754954   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:01.830500   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:01.830653   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:01.954936   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:02.326797   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:02.334465   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:02.335639   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:02.524700   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:02.754987   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:02.837098   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:02.838232   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.027494   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:03.143030   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:03.255526   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:03.331241   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.331741   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:03.450124   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:03.755017   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:03.830999   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:03.831292   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:03.950023   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:04.255624   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:04.330884   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:04.331113   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:04.450006   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:04.754902   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:04.830770   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:04.830892   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:04.950240   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:05.144002   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:05.255579   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:05.331425   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:05.331538   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:05.450547   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:05.755020   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:05.830877   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:05.833001   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:05.950197   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:06.255180   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:06.330733   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:06.331085   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:06.450290   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:06.755775   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:06.831623   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:06.832064   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:06.951191   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:07.254828   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:07.330809   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:07.330896   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:07.450409   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:07.642459   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:07.755173   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:07.832818   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:07.832970   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:07.950195   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:08.254777   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:08.331105   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:08.331331   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:08.451295   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:08.755668   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:08.830875   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:08.831415   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.024729   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:09.254913   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:09.331352   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:09.331545   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.450970   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:09.642953   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:09.755185   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:09.831255   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:09.831308   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:09.949609   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:10.254232   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:10.330894   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:10.331148   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:10.449579   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:10.755476   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:10.830745   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:10.832312   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:10.950316   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:11.255564   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:11.331609   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:11.331744   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:11.451629   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:11.645832   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:11.754465   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:11.830689   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:11.830947   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:11.949932   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:12.254823   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:12.331200   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:12.331632   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:12.449937   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:12.754760   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:12.830708   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:12.831008   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:12.950092   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:13.255080   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:13.330630   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:13.330882   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:13.450067   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:13.754989   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:13.830909   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:13.831073   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:13.950222   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:14.143250   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:14.256478   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.330371   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.330423   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:14.449388   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:14.754822   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:14.830685   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:14.831407   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:14.950934   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:15.255567   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:15.330984   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:15.331226   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:15.450756   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:15.754796   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:15.830733   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:15.831115   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:15.949289   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:16.254579   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:16.330852   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:16.330882   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:16.450030   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:16.642044   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:16.754779   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:16.830856   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:16.830909   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:16.950257   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:17.255116   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:17.330750   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:17.330928   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:17.451240   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:17.754986   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:17.830900   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:17.830981   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:17.951246   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:18.323618   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:18.335537   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:18.336563   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:18.527237   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:18.726747   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:18.823253   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:18.840076   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:18.841236   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.026725   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:19.255056   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:19.332099   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:19.333460   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.453834   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:19.754939   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:19.831362   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:19.832922   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:19.953115   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:20.255213   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:20.331181   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:20.331284   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:20.450284   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:20.755310   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:20.831468   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:20.832905   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:20.950812   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:21.144028   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:21.254929   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:21.331109   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:21.331360   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:21.450015   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:21.755391   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:21.831260   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:21.831385   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:21.950230   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:22.255364   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:22.331136   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:22.331288   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:22.450207   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:22.755098   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:22.831270   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:22.831567   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:22.951305   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:23.255303   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:23.330389   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:23.330966   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:23.450876   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:23.643273   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:23.755209   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:23.831959   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:23.833263   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:23.950238   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:24.255124   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:24.330988   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:24.331203   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:24.450277   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:24.755001   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:24.830809   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:24.831099   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:24.949616   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:25.255583   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:25.331497   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:25.331603   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:25.450098   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:25.644095   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:25.755446   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:25.831364   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:25.832893   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:25.950450   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:26.254643   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:26.330690   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:26.331434   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:26.452061   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:26.755258   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:26.830811   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:26.831087   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:26.950047   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:27.255116   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:27.330624   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:27.330665   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:06:27.450395   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:27.646433   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:27.755428   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:27.830953   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:27.831596   21245 kapi.go:107] duration metric: took 41.005565184s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 00:06:27.952827   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:28.254780   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:28.331385   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:28.449193   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:28.755091   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:28.831199   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:28.950348   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:29.254841   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:29.331596   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:29.449886   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:29.755541   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:29.831474   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:29.950891   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:30.143254   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:30.255974   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:30.332060   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:30.451187   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:30.754846   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:30.832198   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:30.949829   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:31.254883   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:31.331015   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:31.449893   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:31.754807   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:31.831754   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:31.949567   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:32.143907   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:32.254613   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:32.331329   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:32.449806   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:32.755216   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:32.831434   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:32.949273   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:33.254668   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:33.330809   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:33.449694   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:33.754996   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:33.831528   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:33.949803   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:34.255316   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:34.331863   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:34.450138   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:34.642782   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:34.754572   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:34.830952   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:34.949731   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:35.254907   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:35.331386   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:35.450081   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:35.754802   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:35.831046   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:35.950258   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:36.256504   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:36.331488   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:36.451183   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:36.644033   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:36.754874   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:36.831031   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:36.950748   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:37.255412   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:37.331291   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:37.453741   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:37.755159   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:37.831789   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:37.951728   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:38.255496   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:38.332068   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:38.450049   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:38.755065   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:38.831274   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:38.950415   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:39.143108   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:39.254782   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:39.331465   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:39.449097   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:39.754847   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:39.831293   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:39.950081   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:40.255444   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:40.331928   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:40.450048   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:40.755063   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:40.831661   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:40.949194   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:41.255587   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:41.331252   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:41.451187   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:41.644528   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:41.754954   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:41.832019   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:41.950596   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:42.254797   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:06:42.331863   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:42.449568   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:42.755550   21245 kapi.go:107] duration metric: took 51.004207672s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 00:06:42.758354   21245 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-957510 cluster.
	I0717 00:06:42.760116   21245 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:06:42.761511   21245 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
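The three gcp-auth hints above are the addon's own usage notes. As a minimal illustration of the opt-out they describe (a sketch, not minikube code: only the `gcp-auth-skip-secret` label key comes from the log message, while the pod name and image are hypothetical), a pod that should not receive mounted credentials would carry that label:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod name and image are illustrative; only the label key is taken
	// from the gcp-auth hint logged above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "opt-out-example",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	fmt.Printf("pod %s labels: %v\n", pod.Name, pod.Labels)
}

Pods created without this label get the credential mount the first hint describes.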
	I0717 00:06:42.831181   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:42.949777   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:43.330742   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:43.456434   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:43.831775   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:43.949989   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:44.142204   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:44.330626   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:44.448982   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:44.831203   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:44.950305   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:45.331273   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:45.451039   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:45.831310   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:45.949582   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.143335   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:46.330469   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.449195   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:46.831258   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:46.950217   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:47.331065   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:47.449905   21245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:06:47.830579   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:47.949310   21245 kapi.go:107] duration metric: took 59.504747866s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:06:48.331078   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:48.642815   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:48.831257   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:49.331100   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:49.830795   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:50.331397   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:50.831085   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:51.143018   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:51.331437   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:51.831437   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:52.330930   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:52.831153   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:53.143596   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:53.331003   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:53.831296   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:54.331449   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:54.831075   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:55.331430   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:55.642791   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:55.833095   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:56.331236   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:56.831085   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:57.331035   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:57.831476   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:58.142863   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:06:58.331417   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:58.830738   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:59.330749   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:06:59.830895   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:00.143338   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:00.330566   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:00.830711   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:01.330563   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:01.831410   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:02.143439   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:02.330982   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:02.830807   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:03.330808   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:03.831304   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:04.143736   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:04.332241   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:04.832348   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:05.331212   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:05.831715   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:06.143832   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:06.331036   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:06.831462   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:07.331049   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:07.830992   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.330678   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:08.642074   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:08.831397   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.331164   21245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:07:09.830721   21245 kapi.go:107] duration metric: took 1m23.003777151s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:07:09.832590   21245 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, helm-tiller, metrics-server, storage-provisioner, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0717 00:07:09.834005   21245 addons.go:510] duration metric: took 1m29.543687281s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin default-storageclass helm-tiller metrics-server storage-provisioner yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
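Each kapi.go:96/107 pair above records a label-selector wait: poll the pods matching a selector until all are up, then emit the duration metric. Below is a minimal client-go sketch of that pattern — an illustration under assumptions, not minikube's actual kapi.go; the namespace and selector are taken from this run, and the 6-minute timeout is illustrative:

package main

import (
	"context"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until every pod matching selector in ns is Running,
// roughly the condition each kapi.go:96 line above is reporting on.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling ("Pending" above)
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Selector and namespace from this run; the 6m timeout is an assumption.
	if err := waitForLabel(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("all pods matching selector are Running")
}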
	I0717 00:07:10.642455   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:12.643827   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:15.142476   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:17.142634   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:19.143006   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:21.642798   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:24.142365   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:26.142821   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:28.644691   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:31.142168   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:33.143631   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:35.643532   21245 pod_ready.go:102] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"False"
	I0717 00:07:36.142452   21245 pod_ready.go:92] pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:36.142475   21245 pod_ready.go:81] duration metric: took 1m35.005309819s for pod "metrics-server-c59844bb4-6hgp6" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:36.142485   21245 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vxl6w" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:36.146675   21245 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-vxl6w" in "kube-system" namespace has status "Ready":"True"
	I0717 00:07:36.146695   21245 pod_ready.go:81] duration metric: took 4.20394ms for pod "nvidia-device-plugin-daemonset-vxl6w" in "kube-system" namespace to be "Ready" ...
	I0717 00:07:36.146716   21245 pod_ready.go:38] duration metric: took 1m37.008269238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
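The pod_ready.go:102/92 lines above flip from "Ready":"False" to "Ready":"True" based on the pod's PodReady status condition. A minimal sketch of that check (the assumed shape of the test, not the actual pod_ready.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True — the
// distinction behind "Ready":"False" vs "Ready":"True" in the lines above.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{}
	p.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}}
	fmt.Println(isPodReady(p)) // prints: true
}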
	I0717 00:07:36.146731   21245 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:07:36.146758   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:07:36.146804   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:07:36.180581   21245 cri.go:89] found id: "81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:36.180605   21245 cri.go:89] found id: ""
	I0717 00:07:36.180614   21245 logs.go:276] 1 containers: [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef]
	I0717 00:07:36.180670   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.183910   21245 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:07:36.183977   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:07:36.216421   21245 cri.go:89] found id: "fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:36.216447   21245 cri.go:89] found id: ""
	I0717 00:07:36.216457   21245 logs.go:276] 1 containers: [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba]
	I0717 00:07:36.216505   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.219574   21245 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:07:36.219624   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:07:36.251233   21245 cri.go:89] found id: "c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:36.251257   21245 cri.go:89] found id: ""
	I0717 00:07:36.251266   21245 logs.go:276] 1 containers: [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a]
	I0717 00:07:36.251307   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.254411   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:07:36.254459   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:07:36.286533   21245 cri.go:89] found id: "2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:36.286560   21245 cri.go:89] found id: ""
	I0717 00:07:36.286570   21245 logs.go:276] 1 containers: [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd]
	I0717 00:07:36.286621   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.289736   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:07:36.289798   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:07:36.323138   21245 cri.go:89] found id: "11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:36.323171   21245 cri.go:89] found id: ""
	I0717 00:07:36.323181   21245 logs.go:276] 1 containers: [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b]
	I0717 00:07:36.323229   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.326540   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:07:36.326606   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:07:36.358699   21245 cri.go:89] found id: "71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:36.358726   21245 cri.go:89] found id: ""
	I0717 00:07:36.358736   21245 logs.go:276] 1 containers: [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6]
	I0717 00:07:36.358779   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.361860   21245 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:07:36.361921   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:07:36.394291   21245 cri.go:89] found id: "0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:36.394311   21245 cri.go:89] found id: ""
	I0717 00:07:36.394318   21245 logs.go:276] 1 containers: [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5]
	I0717 00:07:36.394371   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:36.397695   21245 logs.go:123] Gathering logs for kube-apiserver [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef] ...
	I0717 00:07:36.397729   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:36.442031   21245 logs.go:123] Gathering logs for etcd [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba] ...
	I0717 00:07:36.442067   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:36.487909   21245 logs.go:123] Gathering logs for coredns [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a] ...
	I0717 00:07:36.487943   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:36.523250   21245 logs.go:123] Gathering logs for kube-scheduler [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd] ...
	I0717 00:07:36.523279   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:36.566977   21245 logs.go:123] Gathering logs for kube-proxy [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b] ...
	I0717 00:07:36.567017   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:36.600786   21245 logs.go:123] Gathering logs for kubelet ...
	I0717 00:07:36.600811   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 00:07:36.622962   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.128879    1742 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623157   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.128994    1742 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623341   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129050    1742 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623554   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623751   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.623991   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.624189   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.624394   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:36.665853   21245 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:07:36.665886   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:07:36.762597   21245 logs.go:123] Gathering logs for kindnet [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5] ...
	I0717 00:07:36.762650   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:36.802172   21245 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:07:36.802206   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:07:36.878845   21245 logs.go:123] Gathering logs for container status ...
	I0717 00:07:36.878880   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:07:36.921163   21245 logs.go:123] Gathering logs for dmesg ...
	I0717 00:07:36.921192   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:07:36.933602   21245 logs.go:123] Gathering logs for kube-controller-manager [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6] ...
	I0717 00:07:36.933639   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:36.989012   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:36.989040   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 00:07:36.989096   21245 out.go:239] X Problems detected in kubelet:
	W0717 00:07:36.989112   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.989128   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.989142   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.989154   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:36.989165   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:36.989175   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:36.989182   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
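Each "Gathering logs for ..." step above shells out to crictl with a container ID and a 400-line tail. A minimal local sketch of that invocation (the command and the kube-apiserver container ID are taken verbatim from the log; ssh_runner's remote transport is omitted):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// containerLogs mirrors the gathering command above: tail the last 400 lines
// of a CRI container's logs. Assumes crictl and sudo are available locally.
func containerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// kube-apiserver container ID as found by `crictl ps` earlier in this log.
	logs, err := containerLogs("81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef")
	if err != nil {
		log.Fatalf("crictl: %v", err)
	}
	fmt.Print(logs)
}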
	I0717 00:07:46.989712   21245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:07:47.003539   21245 api_server.go:72] duration metric: took 2m6.713307726s to wait for apiserver process to appear ...
	I0717 00:07:47.003570   21245 api_server.go:88] waiting for apiserver healthz status ...
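Waiting for "apiserver healthz status" amounts to polling the apiserver's /healthz endpoint until it answers 200. A minimal sketch under assumptions: the node IP 192.168.49.2 appears earlier in this report, but the 8443 port and the skipped TLS verification are illustrative (minikube's real check uses the cluster's CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves /healthz over TLS; skipping verification keeps the
	// sketch short. A real check would trust the cluster CA instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz") // host from this report; port assumed
	if err != nil {
		fmt.Println("not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 "ok" once healthy
}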
	I0717 00:07:47.003624   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:07:47.003729   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:07:47.039149   21245 cri.go:89] found id: "81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:47.039173   21245 cri.go:89] found id: ""
	I0717 00:07:47.039182   21245 logs.go:276] 1 containers: [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef]
	I0717 00:07:47.039238   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.042577   21245 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:07:47.042640   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:07:47.078678   21245 cri.go:89] found id: "fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:47.078729   21245 cri.go:89] found id: ""
	I0717 00:07:47.078738   21245 logs.go:276] 1 containers: [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba]
	I0717 00:07:47.078790   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.082431   21245 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:07:47.082495   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:07:47.116999   21245 cri.go:89] found id: "c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:47.117024   21245 cri.go:89] found id: ""
	I0717 00:07:47.117031   21245 logs.go:276] 1 containers: [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a]
	I0717 00:07:47.117080   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.120325   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:07:47.120383   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:07:47.152967   21245 cri.go:89] found id: "2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:47.152990   21245 cri.go:89] found id: ""
	I0717 00:07:47.152997   21245 logs.go:276] 1 containers: [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd]
	I0717 00:07:47.153039   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.156347   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:07:47.156406   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:07:47.188920   21245 cri.go:89] found id: "11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:47.188940   21245 cri.go:89] found id: ""
	I0717 00:07:47.188948   21245 logs.go:276] 1 containers: [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b]
	I0717 00:07:47.188993   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.192211   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:07:47.192284   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:07:47.224803   21245 cri.go:89] found id: "71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:47.224825   21245 cri.go:89] found id: ""
	I0717 00:07:47.224832   21245 logs.go:276] 1 containers: [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6]
	I0717 00:07:47.224879   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.228032   21245 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:07:47.228082   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:07:47.260640   21245 cri.go:89] found id: "0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:47.260659   21245 cri.go:89] found id: ""
	I0717 00:07:47.260665   21245 logs.go:276] 1 containers: [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5]
	I0717 00:07:47.260733   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:47.264035   21245 logs.go:123] Gathering logs for kube-apiserver [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef] ...
	I0717 00:07:47.264062   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:47.307542   21245 logs.go:123] Gathering logs for etcd [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba] ...
	I0717 00:07:47.307581   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:47.352648   21245 logs.go:123] Gathering logs for coredns [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a] ...
	I0717 00:07:47.352683   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:47.388745   21245 logs.go:123] Gathering logs for kube-proxy [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b] ...
	I0717 00:07:47.388778   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:47.422861   21245 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:07:47.422896   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:07:47.526454   21245 logs.go:123] Gathering logs for dmesg ...
	I0717 00:07:47.526488   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:07:47.538917   21245 logs.go:123] Gathering logs for kube-scheduler [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd] ...
	I0717 00:07:47.538947   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:47.581561   21245 logs.go:123] Gathering logs for kube-controller-manager [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6] ...
	I0717 00:07:47.581594   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:47.634834   21245 logs.go:123] Gathering logs for kindnet [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5] ...
	I0717 00:07:47.634869   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:47.676127   21245 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:07:47.676157   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:07:47.754501   21245 logs.go:123] Gathering logs for container status ...
	I0717 00:07:47.754541   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:07:47.797074   21245 logs.go:123] Gathering logs for kubelet ...
	I0717 00:07:47.797103   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 00:07:47.823378   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.128879    1742 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.823577   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.128994    1742 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.823770   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129050    1742 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.823994   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.824188   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.824406   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.824599   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.824812   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:47.866461   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:47.866498   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 00:07:47.866552   21245 out.go:239] X Problems detected in kubelet:
	W0717 00:07:47.866562   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.866572   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.866583   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.866592   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:47.866603   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:47.866610   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:47.866617   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
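	
	Each gathering pass above follows the same two-step pattern: cri.go lists container IDs per component with `sudo crictl ps -a --quiet --name=<component>`, then logs.go tails each match with `sudo /usr/bin/crictl logs --tail 400 <id>`. A minimal standalone sketch of that loop (not minikube's own code), assuming crictl is on the PATH and passwordless sudo is available, as both are inside the node:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// listContainers mirrors the cri.go step: list all CRI containers
	// (running or exited) whose name matches the given component.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("listing %s failed: %v\n", c, err)
				continue
			}
			for _, id := range ids {
				// The logs.go step: tail the last 400 lines of each container.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
			}
		}
	}
	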
	I0717 00:07:57.868434   21245 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 00:07:57.872226   21245 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 00:07:57.873112   21245 api_server.go:141] control plane version: v1.30.2
	I0717 00:07:57.873137   21245 api_server.go:131] duration metric: took 10.869560009s to wait for apiserver health ...
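	
	The healthz wait above polls https://192.168.49.2:8443/healthz until it answers 200 with body "ok", which took ~10.9s here. A minimal sketch of such a probe, with TLS verification disabled purely for illustration (minikube itself authenticates with the cluster's client certificates):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// InsecureSkipVerify only to keep the sketch short; do not do this in real code.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver is healthy
				}
			}
			time.Sleep(2 * time.Second) // keep polling until healthy
		}
	}
	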
	I0717 00:07:57.873147   21245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:07:57.873170   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 00:07:57.873225   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 00:07:57.908153   21245 cri.go:89] found id: "81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:57.908178   21245 cri.go:89] found id: ""
	I0717 00:07:57.908187   21245 logs.go:276] 1 containers: [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef]
	I0717 00:07:57.908243   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:57.911546   21245 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 00:07:57.911616   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 00:07:57.946468   21245 cri.go:89] found id: "fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:57.946493   21245 cri.go:89] found id: ""
	I0717 00:07:57.946500   21245 logs.go:276] 1 containers: [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba]
	I0717 00:07:57.946544   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:57.949901   21245 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 00:07:57.949957   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 00:07:57.983995   21245 cri.go:89] found id: "c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:57.984023   21245 cri.go:89] found id: ""
	I0717 00:07:57.984032   21245 logs.go:276] 1 containers: [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a]
	I0717 00:07:57.984096   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:57.987384   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 00:07:57.987442   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 00:07:58.022253   21245 cri.go:89] found id: "2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:58.022278   21245 cri.go:89] found id: ""
	I0717 00:07:58.022287   21245 logs.go:276] 1 containers: [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd]
	I0717 00:07:58.022341   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:58.025883   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 00:07:58.025946   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 00:07:58.060200   21245 cri.go:89] found id: "11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:58.060226   21245 cri.go:89] found id: ""
	I0717 00:07:58.060237   21245 logs.go:276] 1 containers: [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b]
	I0717 00:07:58.060288   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:58.063427   21245 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 00:07:58.063486   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 00:07:58.096823   21245 cri.go:89] found id: "71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:58.096842   21245 cri.go:89] found id: ""
	I0717 00:07:58.096849   21245 logs.go:276] 1 containers: [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6]
	I0717 00:07:58.096893   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:58.100054   21245 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 00:07:58.100105   21245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 00:07:58.132179   21245 cri.go:89] found id: "0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:58.132202   21245 cri.go:89] found id: ""
	I0717 00:07:58.132213   21245 logs.go:276] 1 containers: [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5]
	I0717 00:07:58.132263   21245 ssh_runner.go:195] Run: which crictl
	I0717 00:07:58.135395   21245 logs.go:123] Gathering logs for kube-controller-manager [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6] ...
	I0717 00:07:58.135419   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6"
	I0717 00:07:58.190400   21245 logs.go:123] Gathering logs for container status ...
	I0717 00:07:58.190435   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 00:07:58.232614   21245 logs.go:123] Gathering logs for coredns [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a] ...
	I0717 00:07:58.232645   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a"
	I0717 00:07:58.268084   21245 logs.go:123] Gathering logs for kube-scheduler [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd] ...
	I0717 00:07:58.268115   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd"
	I0717 00:07:58.308206   21245 logs.go:123] Gathering logs for kube-proxy [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b] ...
	I0717 00:07:58.308242   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b"
	I0717 00:07:58.342386   21245 logs.go:123] Gathering logs for kubelet ...
	I0717 00:07:58.342419   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 00:07:58.367512   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.128879    1742 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.367685   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.128994    1742 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.367825   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129050    1742 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368007   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368145   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368295   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368427   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.368576   21245 logs.go:138] Found kubelet problem: Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:58.410033   21245 logs.go:123] Gathering logs for dmesg ...
	I0717 00:07:58.410082   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 00:07:58.422489   21245 logs.go:123] Gathering logs for describe nodes ...
	I0717 00:07:58.422519   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 00:07:58.516296   21245 logs.go:123] Gathering logs for kube-apiserver [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef] ...
	I0717 00:07:58.516325   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef"
	I0717 00:07:58.559240   21245 logs.go:123] Gathering logs for etcd [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba] ...
	I0717 00:07:58.559276   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba"
	I0717 00:07:58.602314   21245 logs.go:123] Gathering logs for kindnet [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5] ...
	I0717 00:07:58.602346   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5"
	I0717 00:07:58.641290   21245 logs.go:123] Gathering logs for CRI-O ...
	I0717 00:07:58.641322   21245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 00:07:58.715089   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:58.715136   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 00:07:58.715211   21245 out.go:239] X Problems detected in kubelet:
	W0717 00:07:58.715227   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129063    1742 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.715238   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129108    1742 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.715255   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129124    1742 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.715267   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: W0717 00:05:59.129353    1742 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	W0717 00:07:58.715277   21245 out.go:239]   Jul 17 00:05:59 addons-957510 kubelet[1742]: E0717 00:05:59.129371    1742 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-957510" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-957510' and this object
	I0717 00:07:58.715288   21245 out.go:304] Setting ErrFile to fd 2...
	I0717 00:07:58.715298   21245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:08:08.726850   21245 system_pods.go:59] 19 kube-system pods found
	I0717 00:08:08.726881   21245 system_pods.go:61] "coredns-7db6d8ff4d-5wj8z" [ebab405b-8b19-41b4-9ade-70d1f44663f0] Running
	I0717 00:08:08.726886   21245 system_pods.go:61] "csi-hostpath-attacher-0" [5119bf74-f492-4daa-b7a6-c340cefcd844] Running
	I0717 00:08:08.726890   21245 system_pods.go:61] "csi-hostpath-resizer-0" [60d3a254-9888-4984-999b-4320716ef437] Running
	I0717 00:08:08.726893   21245 system_pods.go:61] "csi-hostpathplugin-bwnfc" [113aede1-ee6e-49c7-8b2a-fe74ff0c0c03] Running
	I0717 00:08:08.726896   21245 system_pods.go:61] "etcd-addons-957510" [803445be-4a19-4b62-bb2d-adbc1c8b3a11] Running
	I0717 00:08:08.726900   21245 system_pods.go:61] "kindnet-t5p77" [64ea96f1-5fab-40b2-a150-c72cd0f61dff] Running
	I0717 00:08:08.726903   21245 system_pods.go:61] "kube-apiserver-addons-957510" [23d09e74-2585-4bad-a247-5bd11626c398] Running
	I0717 00:08:08.726906   21245 system_pods.go:61] "kube-controller-manager-addons-957510" [1d67f06b-27c4-468e-8a35-a581d913ac10] Running
	I0717 00:08:08.726910   21245 system_pods.go:61] "kube-ingress-dns-minikube" [5e1c5890-c2f9-4c82-aa6c-8895839fcb19] Running
	I0717 00:08:08.726913   21245 system_pods.go:61] "kube-proxy-bvcbh" [6c52b57c-87eb-4842-a98f-48d9bd361f7b] Running
	I0717 00:08:08.726917   21245 system_pods.go:61] "kube-scheduler-addons-957510" [196ec5e6-6d64-4664-a494-23b5eb636cd3] Running
	I0717 00:08:08.726921   21245 system_pods.go:61] "metrics-server-c59844bb4-6hgp6" [40f452f3-f225-4b33-88fc-6a0362123620] Running
	I0717 00:08:08.726924   21245 system_pods.go:61] "nvidia-device-plugin-daemonset-vxl6w" [62fe154c-efaa-413e-90ec-020e5c5db0b7] Running
	I0717 00:08:08.726931   21245 system_pods.go:61] "registry-proxy-nqrkw" [23e004ae-eb71-4040-bb09-9a393ed5044a] Running
	I0717 00:08:08.726934   21245 system_pods.go:61] "registry-stqvk" [ab363c33-d118-4417-9ebe-8caaebc1efff] Running
	I0717 00:08:08.726937   21245 system_pods.go:61] "snapshot-controller-745499f584-9qb2w" [0e7210a7-2baa-4549-8515-5520d4d2ec1e] Running
	I0717 00:08:08.726940   21245 system_pods.go:61] "snapshot-controller-745499f584-qp49p" [725fc7fb-25ca-4913-a457-76c2f14a3fa9] Running
	I0717 00:08:08.726943   21245 system_pods.go:61] "storage-provisioner" [f782a017-1180-4eb6-8c64-0519925113e2] Running
	I0717 00:08:08.726948   21245 system_pods.go:61] "tiller-deploy-6677d64bcd-qmhpn" [dd11389b-b3d6-4f2a-b725-9f58dcbc7c1c] Running
	I0717 00:08:08.726953   21245 system_pods.go:74] duration metric: took 10.853800724s to wait for pod list to return data ...
	I0717 00:08:08.726961   21245 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:08:08.728940   21245 default_sa.go:45] found service account: "default"
	I0717 00:08:08.728959   21245 default_sa.go:55] duration metric: took 1.990301ms for default service account to be created ...
	I0717 00:08:08.728967   21245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:08:08.737814   21245 system_pods.go:86] 19 kube-system pods found
	I0717 00:08:08.737843   21245 system_pods.go:89] "coredns-7db6d8ff4d-5wj8z" [ebab405b-8b19-41b4-9ade-70d1f44663f0] Running
	I0717 00:08:08.737849   21245 system_pods.go:89] "csi-hostpath-attacher-0" [5119bf74-f492-4daa-b7a6-c340cefcd844] Running
	I0717 00:08:08.737853   21245 system_pods.go:89] "csi-hostpath-resizer-0" [60d3a254-9888-4984-999b-4320716ef437] Running
	I0717 00:08:08.737858   21245 system_pods.go:89] "csi-hostpathplugin-bwnfc" [113aede1-ee6e-49c7-8b2a-fe74ff0c0c03] Running
	I0717 00:08:08.737862   21245 system_pods.go:89] "etcd-addons-957510" [803445be-4a19-4b62-bb2d-adbc1c8b3a11] Running
	I0717 00:08:08.737866   21245 system_pods.go:89] "kindnet-t5p77" [64ea96f1-5fab-40b2-a150-c72cd0f61dff] Running
	I0717 00:08:08.737871   21245 system_pods.go:89] "kube-apiserver-addons-957510" [23d09e74-2585-4bad-a247-5bd11626c398] Running
	I0717 00:08:08.737875   21245 system_pods.go:89] "kube-controller-manager-addons-957510" [1d67f06b-27c4-468e-8a35-a581d913ac10] Running
	I0717 00:08:08.737880   21245 system_pods.go:89] "kube-ingress-dns-minikube" [5e1c5890-c2f9-4c82-aa6c-8895839fcb19] Running
	I0717 00:08:08.737884   21245 system_pods.go:89] "kube-proxy-bvcbh" [6c52b57c-87eb-4842-a98f-48d9bd361f7b] Running
	I0717 00:08:08.737888   21245 system_pods.go:89] "kube-scheduler-addons-957510" [196ec5e6-6d64-4664-a494-23b5eb636cd3] Running
	I0717 00:08:08.737892   21245 system_pods.go:89] "metrics-server-c59844bb4-6hgp6" [40f452f3-f225-4b33-88fc-6a0362123620] Running
	I0717 00:08:08.737897   21245 system_pods.go:89] "nvidia-device-plugin-daemonset-vxl6w" [62fe154c-efaa-413e-90ec-020e5c5db0b7] Running
	I0717 00:08:08.737900   21245 system_pods.go:89] "registry-proxy-nqrkw" [23e004ae-eb71-4040-bb09-9a393ed5044a] Running
	I0717 00:08:08.737904   21245 system_pods.go:89] "registry-stqvk" [ab363c33-d118-4417-9ebe-8caaebc1efff] Running
	I0717 00:08:08.737908   21245 system_pods.go:89] "snapshot-controller-745499f584-9qb2w" [0e7210a7-2baa-4549-8515-5520d4d2ec1e] Running
	I0717 00:08:08.737911   21245 system_pods.go:89] "snapshot-controller-745499f584-qp49p" [725fc7fb-25ca-4913-a457-76c2f14a3fa9] Running
	I0717 00:08:08.737915   21245 system_pods.go:89] "storage-provisioner" [f782a017-1180-4eb6-8c64-0519925113e2] Running
	I0717 00:08:08.737920   21245 system_pods.go:89] "tiller-deploy-6677d64bcd-qmhpn" [dd11389b-b3d6-4f2a-b725-9f58dcbc7c1c] Running
	I0717 00:08:08.737926   21245 system_pods.go:126] duration metric: took 8.954809ms to wait for k8s-apps to be running ...
	I0717 00:08:08.737933   21245 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:08:08.737983   21245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:08:08.749145   21245 system_svc.go:56] duration metric: took 11.20132ms WaitForService to wait for kubelet
	I0717 00:08:08.749179   21245 kubeadm.go:582] duration metric: took 2m28.45895358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
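	
	The system_pods.go wait above amounts to listing kube-system pods and requiring each to be in a healthy phase. A sketch of the same check with client-go, assuming the default kubeconfig path; the helper name here is illustrative, not minikube's:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// allRunning reports whether every kube-system pod is Running (or has
	// Succeeded), roughly what the wait above checks before proceeding.
	func allRunning(clientset *kubernetes.Clientset) (bool, error) {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
				return false, nil
			}
		}
		return len(pods.Items) > 0, nil
	}
	
	func main() {
		// Load ~/.kube/config; the path is an assumption of this sketch.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		for {
			if ok, err := allRunning(clientset); err == nil && ok {
				fmt.Println("all kube-system pods running")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
	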
	I0717 00:08:08.749200   21245 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:08:08.752286   21245 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 00:08:08.752321   21245 node_conditions.go:123] node cpu capacity is 8
	I0717 00:08:08.752338   21245 node_conditions.go:105] duration metric: took 3.132147ms to run NodePressure ...
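	
	The NodePressure verification reads the node object's capacity and conditions; the two capacity lines above come straight from node.Status.Capacity. A sketch under the same kubeconfig assumption as the previous snippet:
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "addons-957510", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// These match the "storage ephemeral capacity" and "cpu capacity" lines above.
		fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral())
		fmt.Println("cpu:", node.Status.Capacity.Cpu())
		// Any non-Ready condition that is True (MemoryPressure, DiskPressure,
		// PIDPressure) would indicate pressure on the node.
		for _, c := range node.Status.Conditions {
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pressure condition:", c.Type)
			}
		}
	}
	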
	I0717 00:08:08.752352   21245 start.go:241] waiting for startup goroutines ...
	I0717 00:08:08.752362   21245 start.go:246] waiting for cluster config update ...
	I0717 00:08:08.752385   21245 start.go:255] writing updated cluster config ...
	I0717 00:08:08.752754   21245 ssh_runner.go:195] Run: rm -f paused
	I0717 00:08:08.799014   21245 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:08:08.801132   21245 out.go:177] * Done! kubectl is now configured to use "addons-957510" cluster and "default" namespace by default
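	
	The closing version line compares the kubectl client and cluster minor versions; "minor skew: 0" means they match exactly. A toy sketch of that comparison, illustrative only:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minorSkew returns the absolute difference between the minor versions of
	// two "major.minor.patch" strings; parse errors are ignored in this toy.
	func minorSkew(client, cluster string) int {
		minor := func(v string) int {
			n, _ := strconv.Atoi(strings.Split(v, ".")[1])
			return n
		}
		d := minor(client) - minor(cluster)
		if d < 0 {
			d = -d
		}
		return d
	}
	
	func main() {
		fmt.Println(minorSkew("1.30.2", "1.30.2")) // prints 0, matching the log line
	}
	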
	
	
	==> CRI-O <==
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.081237233Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-8jqfn from CNI network \"kindnet\" (type=ptp)"
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.121510584Z" level=info msg="Stopped pod sandbox: 181c29bb34853eabfdfc031e02b406a46f7b3a55a62f98c6e73f735ce49423af" id=f01d3d98-58a2-4e22-9dc2-098a3102aadb name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.388298429Z" level=info msg="Removing container: a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed" id=bec18559-4b28-4b57-863e-387b7dbaef23 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:11:07 addons-957510 crio[1030]: time="2024-07-17 00:11:07.402361980Z" level=info msg="Removed container a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed: ingress-nginx/ingress-nginx-controller-768f948f8f-8jqfn/controller" id=bec18559-4b28-4b57-863e-387b7dbaef23 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.421847728Z" level=info msg="Removing container: b4543f1979531c950c5365d2b08de163c46bd23458e3d926c4b9a7c1d0941e5b" id=e882c364-1717-4091-8b59-0d89b60c9784 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.435155132Z" level=info msg="Removed container b4543f1979531c950c5365d2b08de163c46bd23458e3d926c4b9a7c1d0941e5b: ingress-nginx/ingress-nginx-admission-patch-pzr2p/patch" id=e882c364-1717-4091-8b59-0d89b60c9784 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.436437993Z" level=info msg="Removing container: d985f0edcf7e9fbbe6cc276058ee74b225b4001c93084152f770047782f1f345" id=6260802f-bbb3-45b8-9c95-f97130d2b638 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.449668256Z" level=info msg="Removed container d985f0edcf7e9fbbe6cc276058ee74b225b4001c93084152f770047782f1f345: ingress-nginx/ingress-nginx-admission-create-x7qqn/create" id=6260802f-bbb3-45b8-9c95-f97130d2b638 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.450969546Z" level=info msg="Stopping pod sandbox: 181c29bb34853eabfdfc031e02b406a46f7b3a55a62f98c6e73f735ce49423af" id=6e23f67f-a1f7-46ce-8359-2f4b6127f811 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.451016990Z" level=info msg="Stopped pod sandbox (already stopped): 181c29bb34853eabfdfc031e02b406a46f7b3a55a62f98c6e73f735ce49423af" id=6e23f67f-a1f7-46ce-8359-2f4b6127f811 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.451291986Z" level=info msg="Removing pod sandbox: 181c29bb34853eabfdfc031e02b406a46f7b3a55a62f98c6e73f735ce49423af" id=0f589f8f-8865-4a90-bd6d-c891511daf16 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.457823690Z" level=info msg="Removed pod sandbox: 181c29bb34853eabfdfc031e02b406a46f7b3a55a62f98c6e73f735ce49423af" id=0f589f8f-8865-4a90-bd6d-c891511daf16 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.458248757Z" level=info msg="Stopping pod sandbox: 0a9e19a37905aada4a4bb078c4b7305f41125766f49776779e5ebcf4875114cc" id=ee74e538-f231-4bbc-bb68-189a45b90936 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.458282310Z" level=info msg="Stopped pod sandbox (already stopped): 0a9e19a37905aada4a4bb078c4b7305f41125766f49776779e5ebcf4875114cc" id=ee74e538-f231-4bbc-bb68-189a45b90936 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.458541822Z" level=info msg="Removing pod sandbox: 0a9e19a37905aada4a4bb078c4b7305f41125766f49776779e5ebcf4875114cc" id=a1abb1a3-66ff-458b-bb31-d5e83ee1ba78 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.464290075Z" level=info msg="Removed pod sandbox: 0a9e19a37905aada4a4bb078c4b7305f41125766f49776779e5ebcf4875114cc" id=a1abb1a3-66ff-458b-bb31-d5e83ee1ba78 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.464727481Z" level=info msg="Stopping pod sandbox: 01035fc60c9fcf9ff600c5cba28c2aa41573aab6ba73c12a01ba583c5af4198b" id=cdb91c12-71e6-434c-b090-828ea258e9f0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.464762009Z" level=info msg="Stopped pod sandbox (already stopped): 01035fc60c9fcf9ff600c5cba28c2aa41573aab6ba73c12a01ba583c5af4198b" id=cdb91c12-71e6-434c-b090-828ea258e9f0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.465072800Z" level=info msg="Removing pod sandbox: 01035fc60c9fcf9ff600c5cba28c2aa41573aab6ba73c12a01ba583c5af4198b" id=a759d296-eb77-4897-b260-091865799e7e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.471574802Z" level=info msg="Removed pod sandbox: 01035fc60c9fcf9ff600c5cba28c2aa41573aab6ba73c12a01ba583c5af4198b" id=a759d296-eb77-4897-b260-091865799e7e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.471982627Z" level=info msg="Stopping pod sandbox: 81904c042f5f5f37c901163892eef1fe3f14df1dbc935d8f8fa170eb20f1298e" id=3978f44d-7a49-44c0-926c-c2a53dc5e485 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.472027176Z" level=info msg="Stopped pod sandbox (already stopped): 81904c042f5f5f37c901163892eef1fe3f14df1dbc935d8f8fa170eb20f1298e" id=3978f44d-7a49-44c0-926c-c2a53dc5e485 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.472336864Z" level=info msg="Removing pod sandbox: 81904c042f5f5f37c901163892eef1fe3f14df1dbc935d8f8fa170eb20f1298e" id=5aabfff9-2c7b-41a3-afd9-6102ccc6ef9c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:11:25 addons-957510 crio[1030]: time="2024-07-17 00:11:25.478793286Z" level=info msg="Removed pod sandbox: 81904c042f5f5f37c901163892eef1fe3f14df1dbc935d8f8fa170eb20f1298e" id=5aabfff9-2c7b-41a3-afd9-6102ccc6ef9c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 17 00:13:37 addons-957510 crio[1030]: time="2024-07-17 00:13:37.614570573Z" level=info msg="Stopping container: cadab9d57975e238315c7f62d156d1146d2714298d9aab2c6b20cc2a8f1cdde2 (timeout: 30s)" id=1ab704f4-f201-4574-8781-f7f40690487d name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dab5ee9dc6842       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   33372cce1f7af       hello-world-app-6778b5fc9f-gt5lx
	603cc2d5f3003       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         4 minutes ago       Running             nginx                     0                   2e57fa18ce3b6       nginx
	6dbdb17793c74       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   6448fcef75de5       headlamp-7867546754-gqqd4
	07383d40a8a7b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   dbcf0316840ce       gcp-auth-5db96cd9b4-qp6rr
	cc99503030cd9       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         6 minutes ago       Running             yakd                      0                   8349a2c01eee4       yakd-dashboard-799879c74f-7m6rj
	cadab9d57975e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   32a0eadbc44dd       metrics-server-c59844bb4-6hgp6
	c393bc759d0e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   14f6496da41e2       storage-provisioner
	c567976e9c07b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   60ae80776905c       coredns-7db6d8ff4d-5wj8z
	0ceadb4c6599e       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115                      7 minutes ago       Running             kindnet-cni               0                   6bfd66e9c1ea4       kindnet-t5p77
	11425c4b5b25a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                        7 minutes ago       Running             kube-proxy                0                   ea91018cad8a3       kube-proxy-bvcbh
	2295ef488b3d8       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                        8 minutes ago       Running             kube-scheduler            0                   3548dc8ed9025       kube-scheduler-addons-957510
	71595cce63070       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                        8 minutes ago       Running             kube-controller-manager   0                   89cd5b6944ecb       kube-controller-manager-addons-957510
	fe7b23d958f97       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   76b2554bb6499       etcd-addons-957510
	81a854553dec6       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                        8 minutes ago       Running             kube-apiserver            0                   d3468b84e0ee4       kube-apiserver-addons-957510
	
	
	==> coredns [c567976e9c07b3e1bc4d1780acecb53b74e5b81e47e969c90b3a32dc323ad19a] <==
	[INFO] 10.244.0.9:44353 - 45254 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099001s
	[INFO] 10.244.0.9:58287 - 23596 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004682751s
	[INFO] 10.244.0.9:58287 - 34608 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005348433s
	[INFO] 10.244.0.9:41381 - 10914 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005373308s
	[INFO] 10.244.0.9:41381 - 7325 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.023407916s
	[INFO] 10.244.0.9:45442 - 35194 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006324323s
	[INFO] 10.244.0.9:45442 - 20551 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006458581s
	[INFO] 10.244.0.9:43865 - 874 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066375s
	[INFO] 10.244.0.9:43865 - 31343 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091279s
	[INFO] 10.244.0.20:51353 - 49497 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000196523s
	[INFO] 10.244.0.20:35332 - 13258 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000163174s
	[INFO] 10.244.0.20:43550 - 33118 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107325s
	[INFO] 10.244.0.20:38046 - 39500 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114093s
	[INFO] 10.244.0.20:36745 - 45526 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124052s
	[INFO] 10.244.0.20:55234 - 10191 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170774s
	[INFO] 10.244.0.20:52931 - 39191 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005902509s
	[INFO] 10.244.0.20:36858 - 59651 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007520362s
	[INFO] 10.244.0.20:48035 - 24244 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006161377s
	[INFO] 10.244.0.20:59410 - 60807 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007408859s
	[INFO] 10.244.0.20:53557 - 38929 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005649553s
	[INFO] 10.244.0.20:46788 - 4828 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00775879s
	[INFO] 10.244.0.20:44742 - 37096 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000923812s
	[INFO] 10.244.0.20:45602 - 45687 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00110212s
	[INFO] 10.244.0.26:55101 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000226076s
	[INFO] 10.244.0.26:40134 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147343s
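	
	The run of NXDOMAIN answers above is the pod resolver walking its search path: the same service name is retried with each resolv.conf search suffix (the GCE-specific suffixes visible in the queries) before the bare name finally resolves NOERROR. A small sketch of that expansion, using the suffixes from the log:
	
	package main
	
	import "fmt"
	
	func main() {
		name := "registry.kube-system.svc.cluster.local"
		// Search suffixes taken from the queries above (node's GCE environment).
		search := []string{
			"cluster.local",
			"us-central1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		for _, s := range search {
			fmt.Printf("%s.%s\n", name, s) // each of these returned NXDOMAIN
		}
		fmt.Println(name) // the absolute query returned NOERROR
	}
	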
	
	
	==> describe nodes <==
	Name:               addons-957510
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-957510
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6910ff1293b7338a320c1c51aaf2fcee1cf8a91
	                    minikube.k8s.io/name=addons-957510
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_05_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-957510
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:05:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-957510
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:13:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:11:33 +0000   Wed, 17 Jul 2024 00:05:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:11:33 +0000   Wed, 17 Jul 2024 00:05:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:11:33 +0000   Wed, 17 Jul 2024 00:05:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:11:33 +0000   Wed, 17 Jul 2024 00:05:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-957510
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859328Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859328Ki
	  pods:               110
	System Info:
	  Machine ID:                 657e3d13bd5d4ac4bc838c3d4cd57cc8
	  System UUID:                a3a08e87-d85e-4f7e-bd87-33ecbd5c47c7
	  Boot ID:                    3bd8d3e2-5698-4d65-8304-5a0a45a28197
	  Kernel Version:             5.15.0-1062-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-gt5lx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  gcp-auth                    gcp-auth-5db96cd9b4-qp6rr                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	  headlamp                    headlamp-7867546754-gqqd4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 coredns-7db6d8ff4d-5wj8z                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m58s
	  kube-system                 etcd-addons-957510                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m13s
	  kube-system                 kindnet-t5p77                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m58s
	  kube-system                 kube-apiserver-addons-957510             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-controller-manager-addons-957510    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-bvcbh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-scheduler-addons-957510             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 metrics-server-c59844bb4-6hgp6           100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m53s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  yakd-dashboard              yakd-dashboard-799879c74f-7m6rj          0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     7m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             548Mi (1%)  476Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node addons-957510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node addons-957510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m19s (x8 over 8m19s)  kubelet          Node addons-957510 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m13s                  kubelet          Node addons-957510 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m13s                  kubelet          Node addons-957510 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m13s                  kubelet          Node addons-957510 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m59s                  node-controller  Node addons-957510 event: Registered Node addons-957510 in Controller
	  Normal  NodeReady                7m39s                  kubelet          Node addons-957510 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000702] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000678] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000675] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000634] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.648033] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.057399] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.006733] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015635] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002903] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.015150] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.938511] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 00:08] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000014] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[  +1.007400] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[  +2.015849] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[  +4.191720] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[Jul17 00:09] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[ +16.126848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	[ +32.509623] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 8a fc ee 47 6e 6d 3e 3c 72 fa 2b db 08 00
	
	
	==> etcd [fe7b23d958f9737238b5bc12f3b75e4db2dce8717b8eb3808432fda89e2b37ba] <==
	{"level":"info","ts":"2024-07-17T00:05:43.738988Z","caller":"traceutil/trace.go:171","msg":"trace[835334306] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"195.922364ms","start":"2024-07-17T00:05:43.543031Z","end":"2024-07-17T00:05:43.738953Z","steps":["trace[835334306] 'process raft request'  (duration: 94.450867ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.739505Z","caller":"traceutil/trace.go:171","msg":"trace[237447508] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"196.683009ms","start":"2024-07-17T00:05:43.542743Z","end":"2024-07-17T00:05:43.739426Z","steps":["trace[237447508] 'process raft request'  (duration: 78.807255ms)","trace[237447508] 'compare'  (duration: 15.631884ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:43.821325Z","caller":"traceutil/trace.go:171","msg":"trace[327262368] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"277.812357ms","start":"2024-07-17T00:05:43.54318Z","end":"2024-07-17T00:05:43.820992Z","steps":["trace[327262368] 'process raft request'  (duration: 94.44477ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.824141Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.34824ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128030573248648663 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-bkp95\" mod_revision:427 > success:<request_put:<key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-bkp95\" value_size:2367 >> failure:<request_range:<key:\"/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-bkp95\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T00:05:43.825057Z","caller":"traceutil/trace.go:171","msg":"trace[1907663100] transaction","detail":"{read_only:false; number_of_response:1; response_revision:434; }","duration":"281.692283ms","start":"2024-07-17T00:05:43.543341Z","end":"2024-07-17T00:05:43.825033Z","steps":["trace[1907663100] 'process raft request'  (duration: 94.340421ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.831455Z","caller":"traceutil/trace.go:171","msg":"trace[318979519] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"288.022028ms","start":"2024-07-17T00:05:43.543389Z","end":"2024-07-17T00:05:43.831411Z","steps":["trace[318979519] 'process raft request'  (duration: 94.427699ms)","trace[318979519] 'store kv pair into bolt db' {req_type:put; key:/registry/pods/default/cloud-spanner-emulator-6fcd4f6f98-bkp95; req_size:2434; } (duration: 184.089855ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:43.84236Z","caller":"traceutil/trace.go:171","msg":"trace[1434870633] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"103.429161ms","start":"2024-07-17T00:05:43.738914Z","end":"2024-07-17T00:05:43.842343Z","steps":["trace[1434870633] 'process raft request'  (duration: 103.40138ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.842686Z","caller":"traceutil/trace.go:171","msg":"trace[454952717] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"299.007266ms","start":"2024-07-17T00:05:43.543661Z","end":"2024-07-17T00:05:43.842668Z","steps":["trace[454952717] 'process raft request'  (duration: 281.365191ms)","trace[454952717] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/deployments/kube-system/coredns; req_size:4078; } (duration: 10.323103ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:43.842817Z","caller":"traceutil/trace.go:171","msg":"trace[555935544] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"119.221251ms","start":"2024-07-17T00:05:43.723587Z","end":"2024-07-17T00:05:43.842809Z","steps":["trace[555935544] 'process raft request'  (duration: 118.666651ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.842834Z","caller":"traceutil/trace.go:171","msg":"trace[1039293278] linearizableReadLoop","detail":"{readStateIndex:448; appliedIndex:442; }","duration":"299.013709ms","start":"2024-07-17T00:05:43.54379Z","end":"2024-07-17T00:05:43.842804Z","steps":["trace[1039293278] 'read index received'  (duration: 77.767144ms)","trace[1039293278] 'applied index is now lower than readState.Index'  (duration: 221.245747ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:43.842891Z","caller":"traceutil/trace.go:171","msg":"trace[997092808] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"208.827109ms","start":"2024-07-17T00:05:43.634057Z","end":"2024-07-17T00:05:43.842884Z","steps":["trace[997092808] 'process raft request'  (duration: 208.107466ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.842968Z","caller":"traceutil/trace.go:171","msg":"trace[774825181] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"104.562529ms","start":"2024-07-17T00:05:43.73839Z","end":"2024-07-17T00:05:43.842953Z","steps":["trace[774825181] 'process raft request'  (duration: 103.898216ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:43.842999Z","caller":"traceutil/trace.go:171","msg":"trace[687420027] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"208.572081ms","start":"2024-07-17T00:05:43.634421Z","end":"2024-07-17T00:05:43.842993Z","steps":["trace[687420027] 'process raft request'  (duration: 207.797377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.843099Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.296486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-957510\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2024-07-17T00:05:43.843116Z","caller":"traceutil/trace.go:171","msg":"trace[696636387] range","detail":"{range_begin:/registry/minions/addons-957510; range_end:; response_count:1; response_revision:441; }","duration":"299.340331ms","start":"2024-07-17T00:05:43.54377Z","end":"2024-07-17T00:05:43.843111Z","steps":["trace[696636387] 'agreement among raft nodes before linearized reading'  (duration: 299.29242ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.934045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.077285ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:05:43.934103Z","caller":"traceutil/trace.go:171","msg":"trace[1245650603] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:442; }","duration":"299.19366ms","start":"2024-07-17T00:05:43.634897Z","end":"2024-07-17T00:05:43.934091Z","steps":["trace[1245650603] 'agreement among raft nodes before linearized reading'  (duration: 299.084082ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.934348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.541294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T00:05:43.934377Z","caller":"traceutil/trace.go:171","msg":"trace[1544716083] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:442; }","duration":"195.590938ms","start":"2024-07-17T00:05:43.738777Z","end":"2024-07-17T00:05:43.934368Z","steps":["trace[1544716083] 'agreement among raft nodes before linearized reading'  (duration: 195.545137ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:05:43.934491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.293363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-957510\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2024-07-17T00:05:43.934511Z","caller":"traceutil/trace.go:171","msg":"trace[998651653] range","detail":"{range_begin:/registry/minions/addons-957510; range_end:; response_count:1; response_revision:442; }","duration":"196.341559ms","start":"2024-07-17T00:05:43.738163Z","end":"2024-07-17T00:05:43.934505Z","steps":["trace[998651653] 'agreement among raft nodes before linearized reading'  (duration: 196.297152ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:44.125374Z","caller":"traceutil/trace.go:171","msg":"trace[1245597813] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"204.790646ms","start":"2024-07-17T00:05:43.920562Z","end":"2024-07-17T00:05:44.125352Z","steps":["trace[1245597813] 'process raft request'  (duration: 121.692323ms)","trace[1245597813] 'compare'  (duration: 82.542472ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:05:44.12553Z","caller":"traceutil/trace.go:171","msg":"trace[1227819910] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"196.243609ms","start":"2024-07-17T00:05:43.929278Z","end":"2024-07-17T00:05:44.125521Z","steps":["trace[1227819910] 'process raft request'  (duration: 195.607359ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:05:44.125736Z","caller":"traceutil/trace.go:171","msg":"trace[2108594412] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"196.297426ms","start":"2024-07-17T00:05:43.92943Z","end":"2024-07-17T00:05:44.125727Z","steps":["trace[2108594412] 'process raft request'  (duration: 195.490113ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:06:38.068874Z","caller":"traceutil/trace.go:171","msg":"trace[1933927538] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"111.538215ms","start":"2024-07-17T00:06:37.957314Z","end":"2024-07-17T00:06:38.068853Z","steps":["trace[1933927538] 'process raft request'  (duration: 111.358606ms)"],"step_count":1}
	
	
	==> gcp-auth [07383d40a8a7b73b7ea3ccd6187d01bf085eb24804678e4e29a79f346314be38] <==
	2024/07/17 00:06:41 GCP Auth Webhook started!
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:09 Ready to marshal response ...
	2024/07/17 00:08:09 Ready to write response ...
	2024/07/17 00:08:18 Ready to marshal response ...
	2024/07/17 00:08:18 Ready to write response ...
	2024/07/17 00:08:19 Ready to marshal response ...
	2024/07/17 00:08:19 Ready to write response ...
	2024/07/17 00:08:27 Ready to marshal response ...
	2024/07/17 00:08:27 Ready to write response ...
	2024/07/17 00:08:36 Ready to marshal response ...
	2024/07/17 00:08:36 Ready to write response ...
	2024/07/17 00:08:38 Ready to marshal response ...
	2024/07/17 00:08:38 Ready to write response ...
	2024/07/17 00:08:54 Ready to marshal response ...
	2024/07/17 00:08:54 Ready to write response ...
	2024/07/17 00:11:02 Ready to marshal response ...
	2024/07/17 00:11:02 Ready to write response ...
	
	
	==> kernel <==
	 00:13:38 up 56 min,  0 users,  load average: 0.16, 0.30, 0.23
	Linux addons-957510 5.15.0-1062-gcp #70~20.04.1-Ubuntu SMP Fri May 24 20:12:18 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0ceadb4c6599efa2a7f73690f742eaafa931cebca7b9f7a6a1782a5b0ab92aa5] <==
	I0717 00:12:29.020671       1 main.go:303] handling current node
	W0717 00:12:32.051111       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:12:32.051143       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:12:32.492467       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:12:32.492499       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:12:34.068933       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 00:12:34.068981       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0717 00:12:39.020970       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:12:39.021013       1 main.go:303] handling current node
	I0717 00:12:49.021048       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:12:49.021091       1 main.go:303] handling current node
	I0717 00:12:59.021040       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:12:59.021075       1 main.go:303] handling current node
	W0717 00:13:05.260704       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0717 00:13:05.260748       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0717 00:13:08.730530       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:13:08.730569       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0717 00:13:09.021388       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:13:09.021430       1 main.go:303] handling current node
	I0717 00:13:19.020732       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:13:19.020775       1 main.go:303] handling current node
	W0717 00:13:28.903214       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:13:28.903256       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0717 00:13:29.020658       1 main.go:299] Handling node with IPs: map[192.168.49.2:{}]
	I0717 00:13:29.020689       1 main.go:303] handling current node
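The reflector warnings above show the kindnet service account being denied cluster-scope list/watch on pods, namespaces, and networkpolicies; node handling still proceeds, so only those optional informers are degraded. A minimal sketch for confirming the gap, assuming the ClusterRole is named kindnet:

    # Hedged sketch: check what the kindnet service account is allowed to do.
    kubectl --context addons-957510 auth can-i list pods --all-namespaces \
      --as=system:serviceaccount:kube-system:kindnet
    kubectl --context addons-957510 auth can-i list networkpolicies.networking.k8s.io \
      --as=system:serviceaccount:kube-system:kindnet
    kubectl --context addons-957510 describe clusterrole kindnet  # role name assumed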
	
	
	==> kube-apiserver [81a854553dec62157dbe24681409c5854a285fce7a238ab22bb310fe277366ef] <==
	I0717 00:08:09.552827       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.182.225"}
	E0717 00:08:19.690623       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:08:19.696093       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:08:19.701484       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0717 00:08:30.115918       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.27:37296: read: connection reset by peer
	I0717 00:08:33.198954       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 00:08:34.216294       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0717 00:08:34.703200       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 00:08:38.675852       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 00:08:39.128551       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.224.235"}
	I0717 00:08:50.051692       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0717 00:09:10.424727       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.424789       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:09:10.438908       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.439035       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:09:10.441821       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.441857       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:09:10.450806       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.450848       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:09:10.462314       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:09:10.462349       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 00:09:11.442633       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:09:11.462497       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:09:11.470528       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 00:11:02.250843       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.232.111"}
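The "invalid bearer token" errors above are the API server rejecting tokens whose backing service account (local-path-provisioner-service-account) had already been deleted during addon teardown; they stop once the client pod exits. A minimal sketch to check whether the account still exists:

    # Hedged sketch: locate the service account named in the error, if any remains.
    kubectl --context addons-957510 get serviceaccounts -A \
      | grep local-path-provisioner-service-account || echo "service account gone"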
	
	
	==> kube-controller-manager [71595cce63070edff1044c4d61fa90a73c33b80b00f3be92329affb66654f3d6] <==
	W0717 00:11:34.202776       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:34.202820       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:11:34.309348       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:34.309378       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:11:46.089938       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:11:46.089988       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:20.237970       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:20.238009       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:22.137487       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:22.137524       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:23.860423       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:23.860456       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:12:46.034264       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:12:46.034311       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:13:01.474716       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:13:01.474755       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:13:16.672134       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:13:16.672167       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:13:17.682580       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:13:17.682614       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:13:33.397797       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:13:33.397838       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:13:37.222671       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:13:37.222708       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 00:13:37.604486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="6.467µs"
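The repeating PartialObjectMetadata "server could not find the requested resource" errors above are the metadata informers retrying against API groups deregistered mid-run (the snapshot.storage.k8s.io CRDs and, at the end, metrics-server). A minimal sketch for spotting stale registrations, with the metrics APIService name assumed:

    # Hedged sketch: list aggregated APIs; AVAILABLE=False entries keep informers retrying.
    kubectl --context addons-957510 get apiservices
    kubectl --context addons-957510 get apiservice v1beta1.metrics.k8s.io -o yaml  # name assumed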
	
	
	==> kube-proxy [11425c4b5b25ab625cfa1c2db236d73da3c2644e444029f7915eddd4a6e1b57b] <==
	I0717 00:05:43.346605       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:05:43.921652       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 00:05:44.631108       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 00:05:44.631234       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:05:44.638126       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 00:05:44.638211       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 00:05:44.638244       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:05:44.639060       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:05:44.639159       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:05:44.640478       1 config.go:192] "Starting service config controller"
	I0717 00:05:44.641310       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:05:44.720087       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:05:44.731825       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:05:44.731720       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:05:44.731911       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:05:44.720145       1 config.go:319] "Starting node config controller"
	I0717 00:05:44.732028       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:05:44.732035       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2295ef488b3d8d545aa4bd8c7cdf0a1faf6fc320ab3de46d91ae21f3bc4d05bd] <==
	W0717 00:05:22.739208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:05:22.739225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:05:22.739267       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:22.739285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:23.564384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:05:23.564419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:05:23.611579       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:23.611609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:23.747501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:05:23.747545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:05:23.758961       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:05:23.759002       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:05:23.794976       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:05:23.795008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:05:23.831092       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:05:23.831129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:05:23.866467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:05:23.866498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:05:23.936478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:05:23.936514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:05:23.939500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:05:23.939533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:05:24.015737       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:05:24.015780       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:05:26.137423       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:11:03 addons-957510 kubelet[1742]: E0717 00:11:03.391209    1742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a\": container with ID starting with 13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a not found: ID does not exist" containerID="13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a"
	Jul 17 00:11:03 addons-957510 kubelet[1742]: I0717 00:11:03.391244    1742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a"} err="failed to get container status \"13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a\": rpc error: code = NotFound desc = could not find container \"13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a\": container with ID starting with 13ed533e6dd223770dca9a7fffc6afe545477d32ecb36207300d559ea35acb9a not found: ID does not exist"
	Jul 17 00:11:04 addons-957510 kubelet[1742]: I0717 00:11:04.388328    1742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-gt5lx" podStartSLOduration=0.672930016 podStartE2EDuration="2.388307869s" podCreationTimestamp="2024-07-17 00:11:02 +0000 UTC" firstStartedPulling="2024-07-17 00:11:02.451068172 +0000 UTC m=+337.449218358" lastFinishedPulling="2024-07-17 00:11:04.166446031 +0000 UTC m=+339.164596211" observedRunningTime="2024-07-17 00:11:04.388243632 +0000 UTC m=+339.386393828" watchObservedRunningTime="2024-07-17 00:11:04.388307869 +0000 UTC m=+339.386458065"
	Jul 17 00:11:05 addons-957510 kubelet[1742]: I0717 00:11:05.075904    1742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090a43c6-4407-4947-833f-5dd55a5864b6" path="/var/lib/kubelet/pods/090a43c6-4407-4947-833f-5dd55a5864b6/volumes"
	Jul 17 00:11:05 addons-957510 kubelet[1742]: I0717 00:11:05.076387    1742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1c5890-c2f9-4c82-aa6c-8895839fcb19" path="/var/lib/kubelet/pods/5e1c5890-c2f9-4c82-aa6c-8895839fcb19/volumes"
	Jul 17 00:11:05 addons-957510 kubelet[1742]: I0717 00:11:05.076761    1742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0202ebc-2668-47ee-baec-18ca041823e8" path="/var/lib/kubelet/pods/d0202ebc-2668-47ee-baec-18ca041823e8/volumes"
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.257183    1742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05cf5423-75de-4998-b8b7-63cc9447eb68-webhook-cert\") pod \"05cf5423-75de-4998-b8b7-63cc9447eb68\" (UID: \"05cf5423-75de-4998-b8b7-63cc9447eb68\") "
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.257235    1742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74tvr\" (UniqueName: \"kubernetes.io/projected/05cf5423-75de-4998-b8b7-63cc9447eb68-kube-api-access-74tvr\") pod \"05cf5423-75de-4998-b8b7-63cc9447eb68\" (UID: \"05cf5423-75de-4998-b8b7-63cc9447eb68\") "
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.259065    1742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05cf5423-75de-4998-b8b7-63cc9447eb68-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "05cf5423-75de-4998-b8b7-63cc9447eb68" (UID: "05cf5423-75de-4998-b8b7-63cc9447eb68"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.259083    1742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cf5423-75de-4998-b8b7-63cc9447eb68-kube-api-access-74tvr" (OuterVolumeSpecName: "kube-api-access-74tvr") pod "05cf5423-75de-4998-b8b7-63cc9447eb68" (UID: "05cf5423-75de-4998-b8b7-63cc9447eb68"). InnerVolumeSpecName "kube-api-access-74tvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.358406    1742 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-74tvr\" (UniqueName: \"kubernetes.io/projected/05cf5423-75de-4998-b8b7-63cc9447eb68-kube-api-access-74tvr\") on node \"addons-957510\" DevicePath \"\""
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.358443    1742 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05cf5423-75de-4998-b8b7-63cc9447eb68-webhook-cert\") on node \"addons-957510\" DevicePath \"\""
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.387239    1742 scope.go:117] "RemoveContainer" containerID="a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed"
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.402646    1742 scope.go:117] "RemoveContainer" containerID="a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed"
	Jul 17 00:11:07 addons-957510 kubelet[1742]: E0717 00:11:07.403040    1742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed\": container with ID starting with a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed not found: ID does not exist" containerID="a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed"
	Jul 17 00:11:07 addons-957510 kubelet[1742]: I0717 00:11:07.403080    1742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed"} err="failed to get container status \"a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed\": rpc error: code = NotFound desc = could not find container \"a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed\": container with ID starting with a881415601f376a5f67465bfef22e2ddec4aa29f38bf92012788f37f4787b8ed not found: ID does not exist"
	Jul 17 00:11:09 addons-957510 kubelet[1742]: I0717 00:11:09.074967    1742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05cf5423-75de-4998-b8b7-63cc9447eb68" path="/var/lib/kubelet/pods/05cf5423-75de-4998-b8b7-63cc9447eb68/volumes"
	Jul 17 00:11:25 addons-957510 kubelet[1742]: I0717 00:11:25.420789    1742 scope.go:117] "RemoveContainer" containerID="b4543f1979531c950c5365d2b08de163c46bd23458e3d926c4b9a7c1d0941e5b"
	Jul 17 00:11:25 addons-957510 kubelet[1742]: I0717 00:11:25.435416    1742 scope.go:117] "RemoveContainer" containerID="d985f0edcf7e9fbbe6cc276058ee74b225b4001c93084152f770047782f1f345"
	Jul 17 00:13:38 addons-957510 kubelet[1742]: I0717 00:13:38.939813    1742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40f452f3-f225-4b33-88fc-6a0362123620-tmp-dir\") pod \"40f452f3-f225-4b33-88fc-6a0362123620\" (UID: \"40f452f3-f225-4b33-88fc-6a0362123620\") "
	Jul 17 00:13:38 addons-957510 kubelet[1742]: I0717 00:13:38.939891    1742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmnc9\" (UniqueName: \"kubernetes.io/projected/40f452f3-f225-4b33-88fc-6a0362123620-kube-api-access-jmnc9\") pod \"40f452f3-f225-4b33-88fc-6a0362123620\" (UID: \"40f452f3-f225-4b33-88fc-6a0362123620\") "
	Jul 17 00:13:38 addons-957510 kubelet[1742]: I0717 00:13:38.940239    1742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f452f3-f225-4b33-88fc-6a0362123620-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "40f452f3-f225-4b33-88fc-6a0362123620" (UID: "40f452f3-f225-4b33-88fc-6a0362123620"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 17 00:13:38 addons-957510 kubelet[1742]: I0717 00:13:38.941671    1742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f452f3-f225-4b33-88fc-6a0362123620-kube-api-access-jmnc9" (OuterVolumeSpecName: "kube-api-access-jmnc9") pod "40f452f3-f225-4b33-88fc-6a0362123620" (UID: "40f452f3-f225-4b33-88fc-6a0362123620"). InnerVolumeSpecName "kube-api-access-jmnc9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:13:39 addons-957510 kubelet[1742]: I0717 00:13:39.040132    1742 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/40f452f3-f225-4b33-88fc-6a0362123620-tmp-dir\") on node \"addons-957510\" DevicePath \"\""
	Jul 17 00:13:39 addons-957510 kubelet[1742]: I0717 00:13:39.040164    1742 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jmnc9\" (UniqueName: \"kubernetes.io/projected/40f452f3-f225-4b33-88fc-6a0362123620-kube-api-access-jmnc9\") on node \"addons-957510\" DevicePath \"\""
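The "ContainerStatus ... NotFound" errors above are a benign race: the kubelet retries DeleteContainer for IDs that CRI-O has already removed. A minimal sketch for confirming against the runtime, reusing the minikube ssh pattern this report already exercises:

    # Hedged sketch: ask CRI-O directly whether the container ID still exists.
    minikube -p addons-957510 ssh "sudo crictl ps -a | grep 13ed533e6dd2 || echo gone"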
	
	
	==> storage-provisioner [c393bc759d0e49a66ab4b193930997dd02bc7ce39cd86df897d0f5d1f06f8e65] <==
	I0717 00:05:59.936859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:05:59.944799       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:05:59.944849       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:05:59.951018       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:05:59.951129       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-957510_89e5a102-4f78-4267-b904-5270b75f732d!
	I0717 00:05:59.951130       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"306f74b4-2ed0-42d7-aa75-f318a87d8dcc", APIVersion:"v1", ResourceVersion:"933", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-957510_89e5a102-4f78-4267-b904-5270b75f732d became leader
	I0717 00:06:00.051335       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-957510_89e5a102-4f78-4267-b904-5270b75f732d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-957510 -n addons-957510
helpers_test.go:261: (dbg) Run:  kubectl --context addons-957510 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-6hgp6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-957510 describe pod metrics-server-c59844bb4-6hgp6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-957510 describe pod metrics-server-c59844bb4-6hgp6: exit status 1 (65.243551ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-6hgp6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-957510 describe pod metrics-server-c59844bb4-6hgp6: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (307.19s)
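The post-mortem above only confirms that the metrics-server pod was already gone before it could be described. A minimal first-pass triage sketch for a live run:

    # Hedged sketch: basic metrics-server triage while the cluster is still up.
    kubectl --context addons-957510 -n kube-system get deploy metrics-server
    kubectl --context addons-957510 -n kube-system logs deploy/metrics-server --tail=50
    kubectl --context addons-957510 top nodes  # fails until the metrics API serves data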


Test pass (306/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.01
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.2/json-events 5.53
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.06
18 TestDownloadOnly/v1.30.2/DeleteAll 0.2
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 10.52
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.2
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
29 TestDownloadOnlyKic 1.07
30 TestBinaryMirror 0.71
31 TestOffline 64.46
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 198.84
38 TestAddons/parallel/Registry 14.87
40 TestAddons/parallel/InspektorGadget 11.67
42 TestAddons/parallel/HelmTiller 9.95
44 TestAddons/parallel/CSI 46.92
45 TestAddons/parallel/Headlamp 13.79
46 TestAddons/parallel/CloudSpanner 5.48
47 TestAddons/parallel/LocalPath 52.97
48 TestAddons/parallel/NvidiaDevicePlugin 6.44
49 TestAddons/parallel/Yakd 6.01
53 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestAddons/StoppedEnableDisable 12.07
55 TestCertOptions 26.55
56 TestCertExpiration 221.21
58 TestForceSystemdFlag 28.24
59 TestForceSystemdEnv 29.58
61 TestKVMDriverInstallOrUpdate 6.71
65 TestErrorSpam/setup 21.5
66 TestErrorSpam/start 0.56
67 TestErrorSpam/status 0.82
68 TestErrorSpam/pause 1.47
69 TestErrorSpam/unpause 1.43
70 TestErrorSpam/stop 1.34
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 52.16
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 27.76
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.74
82 TestFunctional/serial/CacheCmd/cache/add_local 1.07
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
87 TestFunctional/serial/CacheCmd/cache/delete 0.1
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 37.98
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.31
93 TestFunctional/serial/LogsFileCmd 1.35
94 TestFunctional/serial/InvalidService 4.09
96 TestFunctional/parallel/ConfigCmd 0.37
97 TestFunctional/parallel/DashboardCmd 10.32
98 TestFunctional/parallel/DryRun 0.36
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.91
104 TestFunctional/parallel/ServiceCmdConnect 9.52
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 38.77
108 TestFunctional/parallel/SSHCmd 0.6
109 TestFunctional/parallel/CpCmd 1.81
110 TestFunctional/parallel/MySQL 22.65
111 TestFunctional/parallel/FileSync 0.27
112 TestFunctional/parallel/CertSync 1.56
116 TestFunctional/parallel/NodeLabels 0.1
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
120 TestFunctional/parallel/License 0.16
121 TestFunctional/parallel/ServiceCmd/DeployApp 9.21
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.25
127 TestFunctional/parallel/ServiceCmd/List 0.53
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
130 TestFunctional/parallel/ServiceCmd/Format 0.33
131 TestFunctional/parallel/ServiceCmd/URL 0.34
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/Version/short 0.05
139 TestFunctional/parallel/Version/components 0.62
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.85
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.6
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.07
145 TestFunctional/parallel/ImageCommands/Setup 0.98
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
147 TestFunctional/parallel/ProfileCmd/profile_list 0.37
148 TestFunctional/parallel/MountCmd/any-port 5.96
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.3
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
151 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.57
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.81
160 TestFunctional/parallel/MountCmd/specific-port 1.7
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 112.07
169 TestMultiControlPlane/serial/DeployApp 7.16
170 TestMultiControlPlane/serial/PingHostFromPods 1.01
171 TestMultiControlPlane/serial/AddWorkerNode 35.97
172 TestMultiControlPlane/serial/NodeLabels 0.07
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.62
174 TestMultiControlPlane/serial/CopyFile 15.35
175 TestMultiControlPlane/serial/StopSecondaryNode 12.46
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
177 TestMultiControlPlane/serial/RestartSecondaryNode 29.38
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.7
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 142.35
180 TestMultiControlPlane/serial/DeleteSecondaryNode 11.76
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.45
182 TestMultiControlPlane/serial/StopCluster 35.45
183 TestMultiControlPlane/serial/RestartCluster 61.05
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
185 TestMultiControlPlane/serial/AddSecondaryNode 45.54
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.62
190 TestJSONOutput/start/Command 49.47
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.67
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.6
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.68
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.2
215 TestKicCustomNetwork/create_custom_network 28.53
216 TestKicCustomNetwork/use_default_bridge_network 26.78
217 TestKicExistingNetwork 25.64
218 TestKicCustomSubnet 24.42
219 TestKicStaticIP 24.62
220 TestMainNoArgs 0.04
221 TestMinikubeProfile 49.6
224 TestMountStart/serial/StartWithMountFirst 5.55
225 TestMountStart/serial/VerifyMountFirst 0.24
226 TestMountStart/serial/StartWithMountSecond 5.55
227 TestMountStart/serial/VerifyMountSecond 0.24
228 TestMountStart/serial/DeleteFirst 1.6
229 TestMountStart/serial/VerifyMountPostDelete 0.24
230 TestMountStart/serial/Stop 1.17
231 TestMountStart/serial/RestartStopped 7.13
232 TestMountStart/serial/VerifyMountPostStop 0.23
235 TestMultiNode/serial/FreshStart2Nodes 81.91
236 TestMultiNode/serial/DeployApp2Nodes 3.29
237 TestMultiNode/serial/PingHostFrom2Pods 0.72
238 TestMultiNode/serial/AddNode 31.24
239 TestMultiNode/serial/MultiNodeLabels 0.06
240 TestMultiNode/serial/ProfileList 0.29
241 TestMultiNode/serial/CopyFile 8.82
242 TestMultiNode/serial/StopNode 2.08
243 TestMultiNode/serial/StartAfterStop 8.87
244 TestMultiNode/serial/RestartKeepsNodes 89.33
245 TestMultiNode/serial/DeleteNode 4.95
246 TestMultiNode/serial/StopMultiNode 23.65
247 TestMultiNode/serial/RestartMultiNode 54.03
248 TestMultiNode/serial/ValidateNameConflict 26.02
253 TestPreload 122.34
255 TestScheduledStopUnix 100
258 TestInsufficientStorage 10.19
259 TestRunningBinaryUpgrade 82.23
261 TestKubernetesUpgrade 358.09
262 TestMissingContainerUpgrade 160.81
263 TestStoppedBinaryUpgrade/Setup 0.49
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
266 TestNoKubernetes/serial/StartWithK8s 35.5
267 TestStoppedBinaryUpgrade/Upgrade 91.28
268 TestNoKubernetes/serial/StartWithStopK8s 11.72
269 TestNoKubernetes/serial/Start 5.22
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
271 TestNoKubernetes/serial/ProfileList 1.58
272 TestNoKubernetes/serial/Stop 1.24
273 TestNoKubernetes/serial/StartNoArgs 6.56
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
283 TestPause/serial/Start 59.31
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
285 TestPause/serial/SecondStartNoReconfiguration 32.16
286 TestPause/serial/Pause 0.89
287 TestPause/serial/VerifyStatus 0.33
288 TestPause/serial/Unpause 0.72
289 TestPause/serial/PauseAgain 1.17
290 TestPause/serial/DeletePaused 3.04
294 TestPause/serial/VerifyDeletedResources 16.25
299 TestNetworkPlugins/group/false 3.14
304 TestStartStop/group/old-k8s-version/serial/FirstStart 139.97
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.88
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.25
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.86
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 261.94
312 TestStartStop/group/old-k8s-version/serial/DeployApp 7.39
313 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
314 TestStartStop/group/old-k8s-version/serial/Stop 11.83
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
316 TestStartStop/group/old-k8s-version/serial/SecondStart 133.41
318 TestStartStop/group/no-preload/serial/FirstStart 63.84
320 TestStartStop/group/newest-cni/serial/FirstStart 29.2
321 TestStartStop/group/no-preload/serial/DeployApp 7.26
322 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.85
323 TestStartStop/group/no-preload/serial/Stop 11.93
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
325 TestStartStop/group/no-preload/serial/SecondStart 262.63
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
328 TestStartStop/group/newest-cni/serial/Stop 1.22
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
330 TestStartStop/group/newest-cni/serial/SecondStart 13.74
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
334 TestStartStop/group/newest-cni/serial/Pause 3.22
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/embed-certs/serial/FirstStart 56.94
338 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
339 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
340 TestStartStop/group/old-k8s-version/serial/Pause 3
341 TestNetworkPlugins/group/auto/Start 51.64
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.66
346 TestStartStop/group/embed-certs/serial/DeployApp 8.32
347 TestNetworkPlugins/group/kindnet/Start 54.27
348 TestNetworkPlugins/group/auto/KubeletFlags 0.27
349 TestNetworkPlugins/group/auto/NetCatPod 12.2
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.13
351 TestStartStop/group/embed-certs/serial/Stop 12.23
352 TestNetworkPlugins/group/auto/DNS 0.12
353 TestNetworkPlugins/group/auto/Localhost 0.11
354 TestNetworkPlugins/group/auto/HairPin 0.11
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
356 TestStartStop/group/embed-certs/serial/SecondStart 263.25
357 TestNetworkPlugins/group/calico/Start 60.24
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
360 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
361 TestNetworkPlugins/group/kindnet/DNS 0.14
362 TestNetworkPlugins/group/kindnet/Localhost 0.11
363 TestNetworkPlugins/group/kindnet/HairPin 0.12
364 TestNetworkPlugins/group/custom-flannel/Start 62.39
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.26
367 TestNetworkPlugins/group/calico/NetCatPod 10.18
368 TestNetworkPlugins/group/calico/DNS 0.16
369 TestNetworkPlugins/group/calico/Localhost 0.14
370 TestNetworkPlugins/group/calico/HairPin 0.11
371 TestNetworkPlugins/group/enable-default-cni/Start 37.65
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.18
374 TestNetworkPlugins/group/custom-flannel/DNS 0.13
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
380 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
383 TestNetworkPlugins/group/flannel/Start 55.82
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
386 TestStartStop/group/no-preload/serial/Pause 2.97
387 TestNetworkPlugins/group/bridge/Start 76.27
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
390 TestNetworkPlugins/group/flannel/NetCatPod 10.17
391 TestNetworkPlugins/group/flannel/DNS 0.13
392 TestNetworkPlugins/group/flannel/Localhost 0.1
393 TestNetworkPlugins/group/flannel/HairPin 0.1
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
395 TestNetworkPlugins/group/bridge/NetCatPod 10.19
396 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
397 TestNetworkPlugins/group/bridge/DNS 0.15
398 TestNetworkPlugins/group/bridge/Localhost 0.1
399 TestNetworkPlugins/group/bridge/HairPin 0.1
400 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
401 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
402 TestStartStop/group/embed-certs/serial/Pause 2.71
TestDownloadOnly/v1.20.0/json-events (7.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-659232 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-659232 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.010414017s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.01s)
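
For reference, the recorded invocation can be replayed outside CI. This is a minimal sketch, assuming a locally built binary at out/minikube-linux-amd64; the profile name repro-v1200 is arbitrary, and the doubled --container-runtime=crio flag in the recorded command is redundant (harmless in this run), so it is passed only once here:

	out/minikube-linux-amd64 start -o=json --download-only -p repro-v1200 \
	  --force --alsologtostderr --kubernetes-version=v1.20.0 \
	  --container-runtime=crio --driver=docker

This downloads the kic base image and the Kubernetes v1.20.0 preload without ever starting a node.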

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
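
The preload-exists assertion only checks that the tarball fetched in the previous step landed in the cache. A manual equivalent, assuming the default ~/.minikube location (this run used MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube):

	ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4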

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-659232
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-659232: exit status 85 (56.484404ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-659232 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |          |
	|         | -p download-only-659232        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:23.602109   19495 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:23.602379   19495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:23.602390   19495 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:23.602394   19495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:23.602554   19495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	W0717 00:04:23.602686   19495 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19265-12715/.minikube/config/config.json: open /home/jenkins/minikube-integration/19265-12715/.minikube/config/config.json: no such file or directory
	I0717 00:04:23.603268   19495 out.go:298] Setting JSON to true
	I0717 00:04:23.604230   19495 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2811,"bootTime":1721171853,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:23.604300   19495 start.go:139] virtualization: kvm guest
	I0717 00:04:23.607115   19495 out.go:97] [download-only-659232] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0717 00:04:23.607267   19495 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 00:04:23.607348   19495 notify.go:220] Checking for updates...
	I0717 00:04:23.608926   19495 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:04:23.610442   19495 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:23.611998   19495 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:04:23.613460   19495 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	I0717 00:04:23.615009   19495 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 00:04:23.617793   19495 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:04:23.618029   19495 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:23.640642   19495 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:04:23.640771   19495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:24.015486   19495 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2024-07-17 00:04:24.005712115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:24.015591   19495 docker.go:307] overlay module found
	I0717 00:04:24.017717   19495 out.go:97] Using the docker driver based on user configuration
	I0717 00:04:24.017756   19495 start.go:297] selected driver: docker
	I0717 00:04:24.017764   19495 start.go:901] validating driver "docker" against <nil>
	I0717 00:04:24.017849   19495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:24.070939   19495 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2024-07-17 00:04:24.061719377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:24.071100   19495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:24.071580   19495 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0717 00:04:24.071720   19495 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:04:24.073879   19495 out.go:169] Using Docker driver with root privileges
	I0717 00:04:24.075409   19495 cni.go:84] Creating CNI manager for ""
	I0717 00:04:24.075432   19495 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:04:24.075442   19495 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:24.075541   19495 start.go:340] cluster config:
	{Name:download-only-659232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-659232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:24.077191   19495 out.go:97] Starting "download-only-659232" primary control-plane node in "download-only-659232" cluster
	I0717 00:04:24.077221   19495 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:04:24.078921   19495 out.go:97] Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:04:24.078962   19495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:04:24.079011   19495 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:04:24.095640   19495 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:04:24.095902   19495 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:04:24.096046   19495 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:04:24.108142   19495 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:24.108168   19495 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:24.108298   19495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 00:04:24.110389   19495 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 00:04:24.110410   19495 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:24.138773   19495 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:27.152049   19495 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	
	
	* The control-plane node download-only-659232 host does not exist
	  To start a cluster, run: "minikube start -p download-only-659232"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
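
The exit status 85 above is expected rather than a defect: a --download-only profile never starts a control-plane node, so there is nothing for logs to collect, and the test passes despite the non-zero exit. A sketch of the same check by hand:

	out/minikube-linux-amd64 logs -p download-only-659232
	echo "exit status: $?"   # 85 in this run; the node was never started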

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)
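
To see what delete --all leaves behind, one can follow it with profile list (a sketch; the exact output format of profile list varies by minikube version):

	out/minikube-linux-amd64 delete --all
	out/minikube-linux-amd64 profile list   # expect no remaining profiles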

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-659232
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
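
As the test name suggests, delete -p is expected to succeed even when the profile is already gone. A hypothetical second invocation, not part of the recorded run:

	out/minikube-linux-amd64 delete -p download-only-659232
	echo $?   # expected 0, per the test name, although the profile no longer exists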

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (5.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-110186 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-110186 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.529443671s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (5.53s)
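
Unlike the v1.20.0 run, the LogsDuration log below records that the kic base image was already cached ("Found ... in local cache directory, skipping pull"). Per image.go, minikube first checks the local docker daemon for the image; a quick manual probe of that state (a sketch, output depends on the host):

	docker images gcr.io/k8s-minikube/kicbase-builds
	# empty on this host per the logs, so minikube used its on-disk cache instead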

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-110186
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-110186: exit status 85 (60.126123ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-659232 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-659232        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-659232        | download-only-659232 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | -o=json --download-only        | download-only-110186 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-110186        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:30.993921   19855 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:30.994027   19855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:30.994036   19855 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:30.994041   19855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:30.994228   19855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:04:30.994756   19855 out.go:298] Setting JSON to true
	I0717 00:04:30.995541   19855 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2818,"bootTime":1721171853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:30.995599   19855 start.go:139] virtualization: kvm guest
	I0717 00:04:30.997471   19855 out.go:97] [download-only-110186] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:04:30.997636   19855 notify.go:220] Checking for updates...
	I0717 00:04:30.999083   19855 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:04:31.000714   19855 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:31.001997   19855 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:04:31.003390   19855 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	I0717 00:04:31.005040   19855 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 00:04:31.008051   19855 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:04:31.008331   19855 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:31.030160   19855 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:04:31.030298   19855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:31.078025   19855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-07-17 00:04:31.069012006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:31.078142   19855 docker.go:307] overlay module found
	I0717 00:04:31.080107   19855 out.go:97] Using the docker driver based on user configuration
	I0717 00:04:31.080143   19855 start.go:297] selected driver: docker
	I0717 00:04:31.080161   19855 start.go:901] validating driver "docker" against <nil>
	I0717 00:04:31.080252   19855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:31.128366   19855 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-07-17 00:04:31.118764035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:31.128581   19855 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:31.129070   19855 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0717 00:04:31.129232   19855 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:04:31.131256   19855 out.go:169] Using Docker driver with root privileges
	I0717 00:04:31.132807   19855 cni.go:84] Creating CNI manager for ""
	I0717 00:04:31.132832   19855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:04:31.132846   19855 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:31.132921   19855 start.go:340] cluster config:
	{Name:download-only-110186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-110186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:31.134613   19855 out.go:97] Starting "download-only-110186" primary control-plane node in "download-only-110186" cluster
	I0717 00:04:31.134643   19855 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:04:31.136290   19855 out.go:97] Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:04:31.136324   19855 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:31.136379   19855 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:04:31.154807   19855 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:04:31.154945   19855 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:04:31.154963   19855 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory, skipping pull
	I0717 00:04:31.154967   19855 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c exists in cache, skipping pull
	I0717 00:04:31.154977   19855 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:04:31.169974   19855 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:31.170008   19855 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:31.170172   19855 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:31.172148   19855 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 00:04:31.172171   19855 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:31.202730   19855 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:cd14409e225276132db5cf7d5d75c2d2 -> /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:34.953647   19855 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:34.953750   19855 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:35.708395   19855 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 00:04:35.708709   19855 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/download-only-110186/config.json ...
	I0717 00:04:35.708737   19855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/download-only-110186/config.json: {Name:mkb1eb7e344ab505ca8761617acb3b8c4cc0aba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:04:35.708910   19855 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 00:04:35.709037   19855 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19265-12715/.minikube/cache/linux/amd64/v1.30.2/kubectl
	
	
	* The control-plane node download-only-110186 host does not exist
	  To start a cluster, run: "minikube start -p download-only-110186"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.06s)
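
The ?checksum=md5:... query on the download URL above is the value the run verifies after saving the tarball ("getting checksum ... verifying checksum" in the log). A manual spot-check, assuming the default ~/.minikube cache location (this run used /home/jenkins/minikube-integration/19265-12715/.minikube):

	md5sum ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	# expect cd14409e225276132db5cf7d5d75c2d2, the md5 from the checksum query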

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-110186
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (10.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-874175 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-874175 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.514989165s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (10.52s)
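
This is the same json-events flow for the third and last Kubernetes version under test. A compact sketch of the whole sequence the Audit tables record, reusing one arbitrary profile name and cleaning up between versions:

	for v in v1.20.0 v1.30.2 v1.31.0-beta.0; do
	  out/minikube-linux-amd64 start -o=json --download-only -p download-repro --force \
	    --alsologtostderr --kubernetes-version="$v" --container-runtime=crio --driver=docker
	  out/minikube-linux-amd64 delete --all
	done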

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-874175
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-874175: exit status 85 (60.494831ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-659232 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-659232             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-659232             | download-only-659232 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | -o=json --download-only             | download-only-110186 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-110186             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| delete  | -p download-only-110186             | download-only-110186 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC | 17 Jul 24 00:04 UTC |
	| start   | -o=json --download-only             | download-only-874175 | jenkins | v1.33.1 | 17 Jul 24 00:04 UTC |                     |
	|         | -p download-only-874175             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:04:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:04:36.909487   20200 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:04:36.909767   20200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:36.909777   20200 out.go:304] Setting ErrFile to fd 2...
	I0717 00:04:36.909781   20200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:04:36.909976   20200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:04:36.910518   20200 out.go:298] Setting JSON to true
	I0717 00:04:36.911437   20200 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2824,"bootTime":1721171853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:04:36.911499   20200 start.go:139] virtualization: kvm guest
	I0717 00:04:36.913846   20200 out.go:97] [download-only-874175] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:04:36.914034   20200 notify.go:220] Checking for updates...
	I0717 00:04:36.915690   20200 out.go:169] MINIKUBE_LOCATION=19265
	I0717 00:04:36.917206   20200 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:04:36.918886   20200 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:04:36.920427   20200 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	I0717 00:04:36.921903   20200 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 00:04:36.924613   20200 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:04:36.924943   20200 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:04:36.947927   20200 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:04:36.948051   20200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:36.992036   20200 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 00:04:36.983098488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:36.992154   20200 docker.go:307] overlay module found
	I0717 00:04:36.994176   20200 out.go:97] Using the docker driver based on user configuration
	I0717 00:04:36.994204   20200 start.go:297] selected driver: docker
	I0717 00:04:36.994216   20200 start.go:901] validating driver "docker" against <nil>
	I0717 00:04:36.994293   20200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:04:37.041671   20200 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-17 00:04:37.033171598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:04:37.041818   20200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:04:37.042261   20200 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0717 00:04:37.042391   20200 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:04:37.044450   20200 out.go:169] Using Docker driver with root privileges
	I0717 00:04:37.045882   20200 cni.go:84] Creating CNI manager for ""
	I0717 00:04:37.045897   20200 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 00:04:37.045905   20200 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 00:04:37.045973   20200 start.go:340] cluster config:
	{Name:download-only-874175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-874175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:04:37.047326   20200 out.go:97] Starting "download-only-874175" primary control-plane node in "download-only-874175" cluster
	I0717 00:04:37.047344   20200 cache.go:121] Beginning downloading kic base image for docker with crio
	I0717 00:04:37.048691   20200 out.go:97] Pulling base image v0.0.44-1721064868-19249 ...
	I0717 00:04:37.048713   20200 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:04:37.048810   20200 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local docker daemon
	I0717 00:04:37.064154   20200 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c to local cache
	I0717 00:04:37.064293   20200 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory
	I0717 00:04:37.064309   20200 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c in local cache directory, skipping pull
	I0717 00:04:37.064313   20200 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c exists in cache, skipping pull
	I0717 00:04:37.064323   20200 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c as a tarball
	I0717 00:04:37.082578   20200 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:37.082602   20200 cache.go:56] Caching tarball of preloaded images
	I0717 00:04:37.082741   20200 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:04:37.084816   20200 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 00:04:37.084838   20200 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:37.116203   20200 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 00:04:43.650197   20200 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:43.650294   20200 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19265-12715/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 00:04:44.397638   20200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 00:04:44.397977   20200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/download-only-874175/config.json ...
	I0717 00:04:44.398008   20200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/download-only-874175/config.json: {Name:mk1f23132122ff493e79a475a7cd779e00d7d967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:04:44.398169   20200 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 00:04:44.398319   20200 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19265-12715/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-874175 host does not exist
	  To start a cluster, run: "minikube start -p download-only-874175"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-874175
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.07s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-079405 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-079405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-079405
--- PASS: TestDownloadOnlyKic (1.07s)

TestBinaryMirror (0.71s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-733705 --alsologtostderr --binary-mirror http://127.0.0.1:36213 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-733705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-733705
--- PASS: TestBinaryMirror (0.71s)

TestOffline (64.46s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-855094 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-855094 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m2.162488304s)
helpers_test.go:175: Cleaning up "offline-crio-855094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-855094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-855094: (2.299442629s)
--- PASS: TestOffline (64.46s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-957510
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-957510: exit status 85 (54.354607ms)

-- stdout --
	* Profile "addons-957510" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-957510"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-957510
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-957510: exit status 85 (56.348424ms)

-- stdout --
	* Profile "addons-957510" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-957510"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (198.84s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-957510 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-957510 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m18.840760213s)
--- PASS: TestAddons/Setup (198.84s)

TestAddons/parallel/Registry (14.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 10.289247ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-stqvk" [ab363c33-d118-4417-9ebe-8caaebc1efff] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00544408s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nqrkw" [23e004ae-eb71-4040-bb09-9a393ed5044a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004257837s
addons_test.go:342: (dbg) Run:  kubectl --context addons-957510 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-957510 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-957510 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.132938895s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 ip
2024/07/17 00:08:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.87s)
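
Note: the registry reachability check above can be reproduced by hand. A minimal sketch, using the same busybox image and in-cluster Service DNS name the test itself invokes:

	# Launch a throwaway pod and probe the registry Service by its cluster DNS name;
	# wget --spider only checks that the endpoint answers, without downloading anything.
	kubectl --context addons-957510 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"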

TestAddons/parallel/InspektorGadget (11.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xj5vl" [d85e1c0b-a502-4c06-9689-f147488e3567] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003534115s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-957510
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-957510: (5.664154437s)
--- PASS: TestAddons/parallel/InspektorGadget (11.67s)

TestAddons/parallel/HelmTiller (9.95s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.904476ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-qmhpn" [dd11389b-b3d6-4f2a-b725-9f58dcbc7c1c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004554403s
addons_test.go:475: (dbg) Run:  kubectl --context addons-957510 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-957510 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.462896787s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.95s)
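
Note: the tiller check above boils down to running the helm client in-cluster against tiller-deploy. A minimal sketch with the same image and namespace the test uses:

	# A successful `helm version` proves the client can reach the tiller-deploy pod,
	# since the command queries both the client and the server version.
	kubectl --context addons-957510 run --rm helm-test --restart=Never \
	  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version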

TestAddons/parallel/CSI (46.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.877993ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-957510 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-957510 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f5725157-a1ac-4136-96cc-564c5d4dcbc5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f5725157-a1ac-4136-96cc-564c5d4dcbc5] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.002971584s
addons_test.go:586: (dbg) Run:  kubectl --context addons-957510 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-957510 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-957510 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-957510 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-957510 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-957510 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-957510 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6ca55058-6930-474d-99a5-a3ae3694fc15] Pending
helpers_test.go:344: "task-pv-pod-restore" [6ca55058-6930-474d-99a5-a3ae3694fc15] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6ca55058-6930-474d-99a5-a3ae3694fc15] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002848818s
addons_test.go:628: (dbg) Run:  kubectl --context addons-957510 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-957510 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-957510 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-957510 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.574585625s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.92s)
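
Note: the repeated `get pvc hpvc` lines above are the test helper polling the claim's phase until it reaches Bound. A hypothetical shell equivalent of that loop (the 2s interval is an assumption; the real retry logic lives in helpers_test.go):

	# Re-run the same jsonpath query until the PVC reports Bound.
	until [ "$(kubectl --context addons-957510 get pvc hpvc \
	      -o jsonpath={.status.phase} -n default)" = "Bound" ]; do
	  sleep 2  # assumed interval, not the helper's actual backoff
	done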

TestAddons/parallel/Headlamp (13.79s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-957510 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-gqqd4" [8ce44524-8ae8-43a4-aaf8-28ea4d75bf15] Pending
helpers_test.go:344: "headlamp-7867546754-gqqd4" [8ce44524-8ae8-43a4-aaf8-28ea4d75bf15] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-gqqd4" [8ce44524-8ae8-43a4-aaf8-28ea4d75bf15] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.002935633s
--- PASS: TestAddons/parallel/Headlamp (13.79s)

TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-bkp95" [230e762e-18b6-4861-a85c-eec1d0299802] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003338212s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-957510
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

TestAddons/parallel/LocalPath (52.97s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-957510 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-957510 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-957510 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [19e79e1f-df45-40f4-ac48-dedf26fb3f35] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [19e79e1f-df45-40f4-ac48-dedf26fb3f35] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [19e79e1f-df45-40f4-ac48-dedf26fb3f35] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002802294s
addons_test.go:992: (dbg) Run:  kubectl --context addons-957510 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 ssh "cat /opt/local-path-provisioner/pvc-7a2029d1-4210-4ea3-8f80-a2f46d6b3dac_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-957510 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-957510 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-957510 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-957510 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.019900633s)
--- PASS: TestAddons/parallel/LocalPath (52.97s)
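
Note: the ssh step above is how the test proves the pod's data actually landed on the node-local volume. A sketch of the same check (the pvc-... directory name is specific to this run, derived from the PVC's UID):

	# Read back the file the test pod wrote into the local-path provisioner's backing directory.
	out/minikube-linux-amd64 -p addons-957510 ssh \
	  "cat /opt/local-path-provisioner/pvc-7a2029d1-4210-4ea3-8f80-a2f46d6b3dac_default_test-pvc/file1"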

TestAddons/parallel/NvidiaDevicePlugin (6.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vxl6w" [62fe154c-efaa-413e-90ec-020e5c5db0b7] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003998867s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-957510
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-7m6rj" [b7abf2fb-941f-422f-9186-34f61e2d557f] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004041234s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-957510 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-957510 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.07s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-957510
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-957510: (11.842641299s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-957510
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-957510
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-957510
--- PASS: TestAddons/StoppedEnableDisable (12.07s)

TestCertOptions (26.55s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-018483 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-018483 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.834150233s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-018483 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-018483 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-018483 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-018483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-018483
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-018483: (2.130719845s)
--- PASS: TestCertOptions (26.55s)
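
Note: the openssl step above is where the custom SANs and API server port are verified. A sketch of inspecting the same certificate by hand (the grep filter is an illustrative addition, not part of the test):

	# The IPs and names passed via --apiserver-ips/--apiserver-names should appear
	# in the certificate's Subject Alternative Name extension.
	out/minikube-linux-amd64 -p cert-options-018483 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"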

TestCertExpiration (221.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-623376 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-623376 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.223799672s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-623376 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0717 00:46:55.236200   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-623376 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (13.738287003s)
helpers_test.go:175: Cleaning up "cert-expiration-623376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-623376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-623376: (2.243394966s)
--- PASS: TestCertExpiration (221.21s)

TestForceSystemdFlag (28.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-410960 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-410960 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.529973766s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-410960 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-410960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-410960
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-410960: (2.436732243s)
--- PASS: TestForceSystemdFlag (28.24s)

TestForceSystemdEnv (29.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-103026 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-103026 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.202902445s)
helpers_test.go:175: Cleaning up "force-systemd-env-103026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-103026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-103026: (2.374336724s)
--- PASS: TestForceSystemdEnv (29.58s)

TestKVMDriverInstallOrUpdate (6.71s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.71s)

TestErrorSpam/setup (21.5s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-916151 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-916151 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-916151 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-916151 --driver=docker  --container-runtime=crio: (21.500966403s)
--- PASS: TestErrorSpam/setup (21.50s)

TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (1.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 stop: (1.165129782s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-916151 --log_dir /tmp/nospam-916151 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19265-12715/.minikube/files/etc/test/nested/copy/19483/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.16s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567309 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-567309 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (52.158081167s)
--- PASS: TestFunctional/serial/StartWithProxy (52.16s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.76s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567309 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-567309 --alsologtostderr -v=8: (27.756288849s)
functional_test.go:659: soft start took 27.757017112s for "functional-567309" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.76s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-567309 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-567309 /tmp/TestFunctionalserialCacheCmdcacheadd_local401575751/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cache add minikube-local-cache-test:functional-567309
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cache delete minikube-local-cache-test:functional-567309
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-567309
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (260.058414ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)
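
Note: the sequence above exercises the full cache round-trip. Condensed from the commands the test runs:

	# Remove the image from the node, confirm it is gone (crictl inspecti exits 1),
	# then restore it from minikube's local cache and confirm it is back.
	out/minikube-linux-amd64 -p functional-567309 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-567309 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-567309 cache reload
	out/minikube-linux-amd64 -p functional-567309 ssh sudo crictl inspecti registry.k8s.io/pause:latest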

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 kubectl -- --context functional-567309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-567309 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.98s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-567309 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.980813106s)
functional_test.go:757: restart took 37.98094315s for "functional-567309" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.98s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-567309 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
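
Note: the phase/status lines above come from parsing the control-plane pods as JSON. A hypothetical one-liner that surfaces the same phase information (the test additionally checks each pod's Ready condition):

	# Print "<pod> <phase>" for every pod labelled tier=control-plane.
	kubectl --context functional-567309 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'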

TestFunctional/serial/LogsCmd (1.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-567309 logs: (1.312196968s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.35s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 logs --file /tmp/TestFunctionalserialLogsFileCmd2993289712/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-567309 logs --file /tmp/TestFunctionalserialLogsFileCmd2993289712/001/logs.txt: (1.34816522s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

TestFunctional/serial/InvalidService (4.09s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-567309 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-567309
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-567309: exit status 115 (312.027413ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32516 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-567309 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)
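Exit status 115 is minikube's SVC_UNREACHABLE path: the Service object exists, but its selector matches no running pod. A sketch of reproducing the same failure by hand, without the testdata/invalidsvc.yaml fixture (the hand-made service name below is illustrative):

	# create a Service whose selector matches no pods
	kubectl --context functional-567309 create service nodeport dangling-svc --tcp=80:80
	# minikube resolves the NodePort, finds no backing pod, and should fail with SVC_UNREACHABLE
	out/minikube-linux-amd64 service dangling-svc -p functional-567309; echo "exit=$?"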

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 config get cpus: exit status 14 (91.734385ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 config get cpus: exit status 14 (47.900057ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
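Exit status 14 is the expected result of `config get` on an unset key, so the two non-zero exits above are part of the pass condition. The full lifecycle the test walks through, condensed:

	out/minikube-linux-amd64 -p functional-567309 config get cpus      # exit 14: key not set
	out/minikube-linux-amd64 -p functional-567309 config set cpus 2
	out/minikube-linux-amd64 -p functional-567309 config get cpus      # prints 2, exit 0
	out/minikube-linux-amd64 -p functional-567309 config unset cpus
	out/minikube-linux-amd64 -p functional-567309 config get cpus      # exit 14 again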

TestFunctional/parallel/DashboardCmd (10.32s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-567309 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-567309 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 60570: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.32s)

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-567309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (148.734506ms)
-- stdout --
	* [functional-567309] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0717 00:17:09.461727   59804 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:17:09.461832   59804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:17:09.461836   59804 out.go:304] Setting ErrFile to fd 2...
	I0717 00:17:09.461840   59804 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:17:09.462034   59804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:17:09.462536   59804 out.go:298] Setting JSON to false
	I0717 00:17:09.463645   59804 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3576,"bootTime":1721171853,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:17:09.463711   59804 start.go:139] virtualization: kvm guest
	I0717 00:17:09.466182   59804 out.go:177] * [functional-567309] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:17:09.468115   59804 notify.go:220] Checking for updates...
	I0717 00:17:09.468130   59804 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:17:09.469994   59804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:17:09.471631   59804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:17:09.473262   59804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	I0717 00:17:09.474710   59804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:17:09.476226   59804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:17:09.477916   59804 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:17:09.478415   59804 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:17:09.504186   59804 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:17:09.504383   59804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:17:09.555553   59804 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-17 00:17:09.545194406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:17:09.555647   59804 docker.go:307] overlay module found
	I0717 00:17:09.557438   59804 out.go:177] * Using the docker driver based on existing profile
	I0717 00:17:09.558652   59804 start.go:297] selected driver: docker
	I0717 00:17:09.558676   59804 start.go:901] validating driver "docker" against &{Name:functional-567309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-567309 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:17:09.558776   59804 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:17:09.561314   59804 out.go:177] 
	W0717 00:17:09.562657   59804 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 00:17:09.563953   59804 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567309 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
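Both invocations stop before touching the cluster. The first fails validation because 250MB is below minikube's 1800MB usable minimum (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY); the second, with no --memory override, validates cleanly against the existing profile. Condensed:

	# rejected up front: requested memory below the 1800MB floor (exit 23 in this run)
	out/minikube-linux-amd64 start -p functional-567309 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# accepted: profile defaults satisfy validation, nothing is started
	out/minikube-linux-amd64 start -p functional-567309 --dry-run --driver=docker --container-runtime=crio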

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-567309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-567309 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (144.173581ms)
-- stdout --
	* [functional-567309] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0717 00:17:09.818639   60072 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:17:09.818743   60072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:17:09.818753   60072 out.go:304] Setting ErrFile to fd 2...
	I0717 00:17:09.818757   60072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:17:09.819045   60072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:17:09.819553   60072 out.go:298] Setting JSON to false
	I0717 00:17:09.820610   60072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3577,"bootTime":1721171853,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:17:09.820682   60072 start.go:139] virtualization: kvm guest
	I0717 00:17:09.822816   60072 out.go:177] * [functional-567309] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0717 00:17:09.824223   60072 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:17:09.824300   60072 notify.go:220] Checking for updates...
	I0717 00:17:09.826909   60072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:17:09.828203   60072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:17:09.829610   60072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	I0717 00:17:09.830936   60072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:17:09.832188   60072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:17:09.833735   60072 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:17:09.834186   60072 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:17:09.857458   60072 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:17:09.857615   60072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:17:09.908133   60072 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2024-07-17 00:17:09.899083486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:17:09.908248   60072 docker.go:307] overlay module found
	I0717 00:17:09.910105   60072 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 00:17:09.911405   60072 start.go:297] selected driver: docker
	I0717 00:17:09.911426   60072 start.go:901] validating driver "docker" against &{Name:functional-567309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721064868-19249@sha256:f2789f25c9e51cdeb9cef760e15dc838ef08abd5bb1913311c1eabedda231e8c Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-567309 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:17:09.911504   60072 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:17:09.913739   60072 out.go:177] 
	W0717 00:17:09.915059   60072 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 00:17:09.916252   60072 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
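The -f flag takes a Go template over the status struct, which is how the second run above builds its one-line custom output. The three output modes exercised, in isolation (the single-field template is an illustrative simplification of the test's longer one):

	out/minikube-linux-amd64 -p functional-567309 status                 # default table
	out/minikube-linux-amd64 -p functional-567309 status -f '{{.Host}}'  # templated field
	out/minikube-linux-amd64 -p functional-567309 status -o json         # machine-readable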

TestFunctional/parallel/ServiceCmdConnect (9.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-567309 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-567309 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-cgqcl" [c7a297a3-e24a-4be8-abb3-677cf0be99fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-cgqcl" [c7a297a3-e24a-4be8-abb3-677cf0be99fa] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003758655s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31072
functional_test.go:1671: http://192.168.49.2:31072: success! body:

Hostname: hello-node-connect-57b4589c47-cgqcl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31072
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.52s)
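The block above is the standard NodePort round trip: deploy, expose, ask minikube for the URL, then fetch it. Condensed into shell (the test itself uses a Go HTTP client, hence the Go-http-client/1.1 user-agent; the curl step is an illustrative stand-in):

	kubectl --context functional-567309 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-567309 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-567309 service hello-node-connect --url)
	curl -s "$URL"   # echoserver reflects the request back, as in the body above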

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (38.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a05e7f3e-2754-47ab-8fdb-d88f280ce64b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004321075s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-567309 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-567309 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-567309 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-567309 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [198c2096-8599-48cf-9bbd-a03718b5a4bc] Pending
helpers_test.go:344: "sp-pod" [198c2096-8599-48cf-9bbd-a03718b5a4bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [198c2096-8599-48cf-9bbd-a03718b5a4bc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006176282s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-567309 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-567309 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-567309 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8827de4c-b01b-4cce-9558-65e0666109b3] Pending
helpers_test.go:344: "sp-pod" [8827de4c-b01b-4cce-9558-65e0666109b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8827de4c-b01b-4cce-9558-65e0666109b3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003567891s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-567309 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.77s)
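The pass condition is the last exec: /tmp/mount/foo, written by the first sp-pod, is still present after that pod is deleted and a new pod binds the same claim. The persistence round trip, condensed (pvc.yaml and pod.yaml are the test's fixtures and are not reproduced here):

	kubectl --context functional-567309 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-567309 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-567309 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-567309 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-567309 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-567309 exec sp-pod -- ls /tmp/mount   # foo survives the pod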

TestFunctional/parallel/SSHCmd (0.6s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

TestFunctional/parallel/CpCmd (1.81s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh -n functional-567309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cp functional-567309:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3771336590/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh -n functional-567309 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh -n functional-567309 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)

TestFunctional/parallel/MySQL (22.65s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-567309 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-cdbl4" [fc437315-b5e3-4894-b8e5-692b4c7e725c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-cdbl4" [fc437315-b5e3-4894-b8e5-692b4c7e725c] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.003594764s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-567309 exec mysql-64454c8b5c-cdbl4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-567309 exec mysql-64454c8b5c-cdbl4 -- mysql -ppassword -e "show databases;": exit status 1 (104.524293ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-567309 exec mysql-64454c8b5c-cdbl4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.65s)
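The first exec fails with ERROR 2002 because mysqld inside the freshly started container has not yet created its socket; the test simply retries until it has. A hand-rolled equivalent of that retry (loop shape is illustrative, pod name taken from this run):

	until kubectl --context functional-567309 exec mysql-64454c8b5c-cdbl4 -- \
	    mysql -ppassword -e "show databases;" 2>/dev/null; do
	  sleep 2   # wait for /var/run/mysqld/mysqld.sock to appear
	done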

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/19483/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo cat /etc/test/nested/copy/19483/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/19483.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo cat /etc/ssl/certs/19483.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/19483.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo cat /usr/share/ca-certificates/19483.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/194832.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo cat /etc/ssl/certs/194832.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/194832.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo cat /usr/share/ca-certificates/194832.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)
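The unhashed paths (19483.pem, 194832.pem) and the hashed ones (51391683.0, 3ec20f2e.0) refer to the same two synced certificates: OpenSSL-style trust stores also expose each cert under its subject hash. A sketch of inspecting that mapping inside the node (the openssl invocation is an illustrative addition, not part of the test):

	# print the subject hash that names the matching /etc/ssl/certs/<hash>.0 entry
	out/minikube-linux-amd64 -p functional-567309 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/19483.pem"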

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-567309 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 ssh "sudo systemctl is-active docker": exit status 1 (256.849506ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 ssh "sudo systemctl is-active containerd": exit status 1 (246.967259ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
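Status 3 is systemd's is-active convention for a unit that is not active, so the two non-zero exits are the pass condition: with crio serving the cluster, docker and containerd must be stopped. Checking all three directly (the crio check is an illustrative addition to the test's two):

	out/minikube-linux-amd64 -p functional-567309 ssh "sudo systemctl is-active crio"         # active, exit 0
	out/minikube-linux-amd64 -p functional-567309 ssh "sudo systemctl is-active docker"       # inactive, exit 3
	out/minikube-linux-amd64 -p functional-567309 ssh "sudo systemctl is-active containerd"   # inactive, exit 3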

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-567309 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-567309 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-vjl8z" [5bd715c7-5388-4ff9-9bb3-c8c786a828b7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-vjl8z" [5bd715c7-5388-4ff9-9bb3-c8c786a828b7] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004015412s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-567309 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-567309 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-567309 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-567309 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 55839: os: process already finished
helpers_test.go:502: unable to terminate pid 55498: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-567309 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-567309 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [cd18ac0f-bcaa-4fe2-ab58-0d919dd8c620] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [cd18ac0f-bcaa-4fe2-ab58-0d919dd8c620] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004053256s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.25s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 service list -o json
functional_test.go:1490: Took "489.839985ms" to run "out/minikube-linux-amd64 -p functional-567309 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32442
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32442
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-567309 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.133.109 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
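10.97.133.109 is the LoadBalancer ingress IP that the running `minikube tunnel` published for nginx-svc (the same field the IngressIP subtest read above), so the service is reachable from the host without a NodePort. Fetching it by hand while the tunnel is up (the curl step is illustrative):

	IP=$(kubectl --context functional-567309 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -sI "http://$IP"   # nginx answers for as long as the tunnel process runs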

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-567309 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.62s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567309 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kicbase/echo-server:functional-567309
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567309 image ls --format short --alsologtostderr:
I0717 00:17:17.656500   63176 out.go:291] Setting OutFile to fd 1 ...
I0717 00:17:17.658907   63176 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:17.658928   63176 out.go:304] Setting ErrFile to fd 2...
I0717 00:17:17.658935   63176 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:17.659379   63176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
I0717 00:17:17.660498   63176 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:17.660617   63176 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:17.661013   63176 cli_runner.go:164] Run: docker container inspect functional-567309 --format={{.State.Status}}
I0717 00:17:17.680289   63176 ssh_runner.go:195] Run: systemctl --version
I0717 00:17:17.680338   63176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-567309
I0717 00:17:17.700880   63176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/functional-567309/id_rsa Username:docker}
I0717 00:17:17.824861   63176 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.85s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567309 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/library/nginx                 | latest             | fffffc90d343c | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-567309  | 8acabb45bf756 | 1.47MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
| docker.io/kicbase/echo-server           | functional-567309  | 9056ab77afb8e | 4.94MB |
| docker.io/library/nginx                 | alpine             | 099a2d701db1f | 45.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567309 image ls --format table --alsologtostderr:
I0717 00:17:20.889021   63716 out.go:291] Setting OutFile to fd 1 ...
I0717 00:17:20.889244   63716 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:20.889252   63716 out.go:304] Setting ErrFile to fd 2...
I0717 00:17:20.889256   63716 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:20.889437   63716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
I0717 00:17:20.889978   63716 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:20.890072   63716 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:20.890479   63716 cli_runner.go:164] Run: docker container inspect functional-567309 --format={{.State.Status}}
I0717 00:17:20.907846   63716 ssh_runner.go:195] Run: systemctl --version
I0717 00:17:20.907942   63716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-567309
I0717 00:17:20.931202   63716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/functional-567309/id_rsa Username:docker}
I0717 00:17:21.120702   63716 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567309 image ls --format json --alsologtostderr:
[{"id":"ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"},
{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"d2298daf1090ac31e8b406ea2c3ad522aa4ebead4bdbade376123dce5d1b0ee0","repoDigests":["docker.io/library/d91e5d14165bb8c237d5db4a4e743fd6556f125947bffc56ab789f788297adf8-tmp@sha256:c9e4e6f5914580fd395ed0766754630bbbd9170ce000f4c7229313c727115e96"],"repoTags":[],"size":"1465611"},
{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":["docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55","docker.io/library/nginx@sha256:d0540253e168c1c4a6ec65d259aadc293efa9b35ad9bf8575a81fa414f79e0c6"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45068814"},
{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244"],"repoTags":["docker.io/library/nginx:latest"],"size":"191746190"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-567309"],"size":"4943877"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},
{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},
{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"85953433"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567309 image ls --format json --alsologtostderr:
I0717 00:17:20.288107   63623 out.go:291] Setting OutFile to fd 1 ...
I0717 00:17:20.288384   63623 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:20.288397   63623 out.go:304] Setting ErrFile to fd 2...
I0717 00:17:20.288403   63623 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:20.288617   63623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
I0717 00:17:20.289270   63623 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:20.289392   63623 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:20.289854   63623 cli_runner.go:164] Run: docker container inspect functional-567309 --format={{.State.Status}}
I0717 00:17:20.307064   63623 ssh_runner.go:195] Run: systemctl --version
I0717 00:17:20.307105   63623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-567309
I0717 00:17:20.333500   63623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/functional-567309/id_rsa Username:docker}
I0717 00:17:20.626384   63623 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.60s)
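Note: the stdout above is the raw crictl image list that all four `image ls` formats are rendered from. As a sketch of the mapping (jq is not used by the test itself and is assumed to be available on the host), the table view's columns can be recovered with:

    out/minikube-linux-amd64 -p functional-567309 image ls --format json \
      | jq -r '.[] | [(.repoTags[0] // "<none>"), .id[0:13], .size] | @tsv'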

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567309 image ls --format yaml --alsologtostderr:
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244
repoTags:
- docker.io/library/nginx:latest
size: "191746190"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests:
- docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55
- docker.io/library/nginx@sha256:d0540253e168c1c4a6ec65d259aadc293efa9b35ad9bf8575a81fa414f79e0c6
repoTags:
- docker.io/library/nginx:alpine
size: "45068814"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-567309
size: "4943877"
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567309 image ls --format yaml --alsologtostderr:
I0717 00:17:18.507544   63231 out.go:291] Setting OutFile to fd 1 ...
I0717 00:17:18.507701   63231 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:18.507708   63231 out.go:304] Setting ErrFile to fd 2...
I0717 00:17:18.507716   63231 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:18.508045   63231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
I0717 00:17:18.508841   63231 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:18.508996   63231 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:18.509551   63231 cli_runner.go:164] Run: docker container inspect functional-567309 --format={{.State.Status}}
I0717 00:17:18.528606   63231 ssh_runner.go:195] Run: systemctl --version
I0717 00:17:18.528662   63231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-567309
I0717 00:17:18.546426   63231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/functional-567309/id_rsa Username:docker}
I0717 00:17:18.632262   63231 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 ssh pgrep buildkitd: exit status 1 (237.120527ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image build -t localhost/my-image:functional-567309 testdata/build --alsologtostderr
2024/07/17 00:17:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-567309 image build -t localhost/my-image:functional-567309 testdata/build --alsologtostderr: (2.418905469s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-567309 image build -t localhost/my-image:functional-567309 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d2298daf109
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-567309
--> 8acabb45bf7
Successfully tagged localhost/my-image:functional-567309
8acabb45bf756c331e151b4a538e0c720eeb0ca46d63dddac4248ee9ffb2cf03
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-567309 image build -t localhost/my-image:functional-567309 testdata/build --alsologtostderr:
I0717 00:17:18.953409   63425 out.go:291] Setting OutFile to fd 1 ...
I0717 00:17:18.953563   63425 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:18.953573   63425 out.go:304] Setting ErrFile to fd 2...
I0717 00:17:18.953578   63425 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:17:18.953795   63425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
I0717 00:17:18.954385   63425 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:18.954980   63425 config.go:182] Loaded profile config "functional-567309": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 00:17:18.955348   63425 cli_runner.go:164] Run: docker container inspect functional-567309 --format={{.State.Status}}
I0717 00:17:18.973712   63425 ssh_runner.go:195] Run: systemctl --version
I0717 00:17:18.973769   63425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-567309
I0717 00:17:18.992171   63425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/functional-567309/id_rsa Username:docker}
I0717 00:17:19.076110   63425 build_images.go:161] Building image from path: /tmp/build.1294289094.tar
I0717 00:17:19.076177   63425 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 00:17:19.084450   63425 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1294289094.tar
I0717 00:17:19.087828   63425 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1294289094.tar: stat -c "%s %y" /var/lib/minikube/build/build.1294289094.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1294289094.tar': No such file or directory
I0717 00:17:19.087867   63425 ssh_runner.go:362] scp /tmp/build.1294289094.tar --> /var/lib/minikube/build/build.1294289094.tar (3072 bytes)
I0717 00:17:19.110357   63425 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1294289094
I0717 00:17:19.118954   63425 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1294289094 -xf /var/lib/minikube/build/build.1294289094.tar
I0717 00:17:19.127560   63425 crio.go:315] Building image: /var/lib/minikube/build/build.1294289094
I0717 00:17:19.127661   63425 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-567309 /var/lib/minikube/build/build.1294289094 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 00:17:21.234696   63425 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-567309 /var/lib/minikube/build/build.1294289094 --cgroup-manager=cgroupfs: (2.107002012s)
I0717 00:17:21.234779   63425 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1294289094
I0717 00:17:21.245344   63425 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1294289094.tar
I0717 00:17:21.322944   63425 build_images.go:217] Built localhost/my-image:functional-567309 from /tmp/build.1294289094.tar
I0717 00:17:21.322980   63425 build_images.go:133] succeeded building to: functional-567309
I0717 00:17:21.322987   63425 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.07s)
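Note: the stderr above traces the whole build path — the testdata/build context is packed into a tar on the host, copied to the node, unpacked under /var/lib/minikube/build, built with podman, and cleaned up. Condensed to the node-side commands actually logged for this run:

    sudo mkdir -p /var/lib/minikube/build/build.1294289094
    sudo tar -C /var/lib/minikube/build/build.1294289094 -xf /var/lib/minikube/build/build.1294289094.tar
    sudo podman build -t localhost/my-image:functional-567309 /var/lib/minikube/build/build.1294289094 --cgroup-manager=cgroupfs
    sudo rm -rf /var/lib/minikube/build/build.1294289094
    sudo rm -f /var/lib/minikube/build/build.1294289094.tar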

TestFunctional/parallel/ImageCommands/Setup (0.98s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-567309
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "302.614865ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "65.271376ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/MountCmd/any-port (5.96s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdany-port1386525931/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721175427402244252" to /tmp/TestFunctionalparallelMountCmdany-port1386525931/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721175427402244252" to /tmp/TestFunctionalparallelMountCmdany-port1386525931/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721175427402244252" to /tmp/TestFunctionalparallelMountCmdany-port1386525931/001/test-1721175427402244252
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.55962ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 00:17 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 00:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 00:17 test-1721175427402244252
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh cat /mount-9p/test-1721175427402244252
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-567309 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c25c2e97-287c-4785-9eaf-1de64e4ec59f] Pending
helpers_test.go:344: "busybox-mount" [c25c2e97-287c-4785-9eaf-1de64e4ec59f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c25c2e97-287c-4785-9eaf-1de64e4ec59f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c25c2e97-287c-4785-9eaf-1de64e4ec59f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004181782s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-567309 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdany-port1386525931/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.96s)
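Note: the first findmnt probe exits with status 1 most likely because the 9p mount daemon had not finished mounting yet; the retry succeeds and the test then exercises the mount through a pod. The same check-and-cleanup cycle by hand (the host directory here is a placeholder, not the generated /tmp path above):

    out/minikube-linux-amd64 mount -p functional-567309 /tmp/mydir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-567309 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-567309 ssh "sudo umount -f /mount-9p"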

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image load --daemon docker.io/kicbase/echo-server:functional-567309 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-567309 image load --daemon docker.io/kicbase/echo-server:functional-567309 --alsologtostderr: (1.073690481s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "330.437467ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "48.269994ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image load --daemon docker.io/kicbase/echo-server:functional-567309 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-567309
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image load --daemon docker.io/kicbase/echo-server:functional-567309 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image save docker.io/kicbase/echo-server:functional-567309 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image rm docker.io/kicbase/echo-server:functional-567309 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)
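Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a tarball round trip. The same cycle as one sequence, with a relative path standing in for the Jenkins workspace path used above:

    out/minikube-linux-amd64 -p functional-567309 image save docker.io/kicbase/echo-server:functional-567309 ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-567309 image rm docker.io/kicbase/echo-server:functional-567309
    out/minikube-linux-amd64 -p functional-567309 image load ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-567309 image ls    # the functional-567309 tag should be listed again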

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-567309
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 image save --daemon docker.io/kicbase/echo-server:functional-567309 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-567309
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdspecific-port1873488033/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.90329ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdspecific-port1873488033/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 ssh "sudo umount -f /mount-9p": exit status 1 (235.040624ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-567309 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdspecific-port1873488033/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1686497276/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1686497276/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1686497276/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T" /mount1: exit status 1 (326.594061ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-567309 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-567309 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1686497276/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1686497276/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-567309 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1686497276/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-567309
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-567309
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-567309
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (112.07s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-605674 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 00:18:08.817774   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:08.823738   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:08.834003   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:08.854322   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:08.894666   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:08.975134   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:09.135568   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:09.456593   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:10.097069   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:11.378147   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:13.938533   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:19.059682   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:29.300076   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:18:49.781132   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:19:30.741860   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-605674 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m51.404454032s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (112.07s)
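Note: the --ha flag starts the profile with multiple control-plane nodes, and the status call with -v=7 is how the test confirms they all came up. A follow-up check of the node roles (not part of the test; the kubectl context name matches the profile, as the NodeLabels step below shows):

    out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
    kubectl --context ha-605674 get nodes -o wide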

TestMultiControlPlane/serial/DeployApp (7.16s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-605674 -- rollout status deployment/busybox: (5.354449513s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-prvhd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-t7ntr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-xmthm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-prvhd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-t7ntr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-xmthm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-prvhd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-t7ntr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-xmthm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.16s)

TestMultiControlPlane/serial/PingHostFromPods (1.01s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-prvhd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-prvhd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-t7ntr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-t7ntr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-xmthm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-605674 -- exec busybox-fc5497c4f-xmthm -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
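
Note on the pipeline above: busybox's nslookup prints the resolved address of host.minikube.internal on its fifth output line, so awk 'NR==5' selects that line and cut -d' ' -f3 extracts the IP, which the test then pings once from inside the pod. Condensed (pod name taken from the log; the line and field positions assume busybox's nslookup output format):

    # extract the host IP that host.minikube.internal resolves to, then ping it
    HOST_IP=$(kubectl --context ha-605674 exec busybox-fc5497c4f-prvhd -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-605674 exec busybox-fc5497c4f-prvhd -- ping -c 1 "$HOST_IP"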

TestMultiControlPlane/serial/AddWorkerNode (35.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-605674 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-605674 -v=7 --alsologtostderr: (35.151213992s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.97s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-605674 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.62s)

TestMultiControlPlane/serial/CopyFile (15.35s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp testdata/cp-test.txt ha-605674:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile111472431/001/cp-test_ha-605674.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674:/home/docker/cp-test.txt ha-605674-m02:/home/docker/cp-test_ha-605674_ha-605674-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test_ha-605674_ha-605674-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674:/home/docker/cp-test.txt ha-605674-m03:/home/docker/cp-test_ha-605674_ha-605674-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m03 "sudo cat /home/docker/cp-test_ha-605674_ha-605674-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674:/home/docker/cp-test.txt ha-605674-m04:/home/docker/cp-test_ha-605674_ha-605674-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m04 "sudo cat /home/docker/cp-test_ha-605674_ha-605674-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp testdata/cp-test.txt ha-605674-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile111472431/001/cp-test_ha-605674-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m02:/home/docker/cp-test.txt ha-605674:/home/docker/cp-test_ha-605674-m02_ha-605674.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674 "sudo cat /home/docker/cp-test_ha-605674-m02_ha-605674.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m02:/home/docker/cp-test.txt ha-605674-m03:/home/docker/cp-test_ha-605674-m02_ha-605674-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m03 "sudo cat /home/docker/cp-test_ha-605674-m02_ha-605674-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m02:/home/docker/cp-test.txt ha-605674-m04:/home/docker/cp-test_ha-605674-m02_ha-605674-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m04 "sudo cat /home/docker/cp-test_ha-605674-m02_ha-605674-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp testdata/cp-test.txt ha-605674-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile111472431/001/cp-test_ha-605674-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m03:/home/docker/cp-test.txt ha-605674:/home/docker/cp-test_ha-605674-m03_ha-605674.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674 "sudo cat /home/docker/cp-test_ha-605674-m03_ha-605674.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m03:/home/docker/cp-test.txt ha-605674-m02:/home/docker/cp-test_ha-605674-m03_ha-605674-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test_ha-605674-m03_ha-605674-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m03:/home/docker/cp-test.txt ha-605674-m04:/home/docker/cp-test_ha-605674-m03_ha-605674-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m04 "sudo cat /home/docker/cp-test_ha-605674-m03_ha-605674-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp testdata/cp-test.txt ha-605674-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile111472431/001/cp-test_ha-605674-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m04:/home/docker/cp-test.txt ha-605674:/home/docker/cp-test_ha-605674-m04_ha-605674.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674 "sudo cat /home/docker/cp-test_ha-605674-m04_ha-605674.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m04:/home/docker/cp-test.txt ha-605674-m02:/home/docker/cp-test_ha-605674-m04_ha-605674-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test_ha-605674-m04_ha-605674-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 cp ha-605674-m04:/home/docker/cp-test.txt ha-605674-m03:/home/docker/cp-test_ha-605674-m04_ha-605674-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m03 "sudo cat /home/docker/cp-test_ha-605674-m04_ha-605674-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.35s)
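
Note: CopyFile exercises every copy direction across the four nodes, and each cp is verified by re-reading the file over ssh. One round trip of the same pattern, condensed (the diff here is illustrative; the test presumably compares contents in Go):

    # push a file to a node, read it back over ssh, and compare
    src=testdata/cp-test.txt
    out/minikube-linux-amd64 -p ha-605674 cp "$src" ha-605674-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-605674 ssh -n ha-605674-m02 "sudo cat /home/docker/cp-test.txt" \
      | diff - "$src" && echo "copy verified"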

TestMultiControlPlane/serial/StopSecondaryNode (12.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-605674 node stop m02 -v=7 --alsologtostderr: (11.821041948s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr: exit status 7 (633.854659ms)

-- stdout --
	ha-605674
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-605674-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-605674-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-605674-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0717 00:20:45.515319   85873 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:20:45.515582   85873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:20:45.515592   85873 out.go:304] Setting ErrFile to fd 2...
	I0717 00:20:45.515596   85873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:20:45.515766   85873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:20:45.515967   85873 out.go:298] Setting JSON to false
	I0717 00:20:45.515998   85873 mustload.go:65] Loading cluster: ha-605674
	I0717 00:20:45.516114   85873 notify.go:220] Checking for updates...
	I0717 00:20:45.516490   85873 config.go:182] Loaded profile config "ha-605674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:20:45.516510   85873 status.go:255] checking status of ha-605674 ...
	I0717 00:20:45.517110   85873 cli_runner.go:164] Run: docker container inspect ha-605674 --format={{.State.Status}}
	I0717 00:20:45.535017   85873 status.go:330] ha-605674 host status = "Running" (err=<nil>)
	I0717 00:20:45.535044   85873 host.go:66] Checking if "ha-605674" exists ...
	I0717 00:20:45.535268   85873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605674
	I0717 00:20:45.552127   85873 host.go:66] Checking if "ha-605674" exists ...
	I0717 00:20:45.552363   85873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:20:45.552442   85873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605674
	I0717 00:20:45.570659   85873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/ha-605674/id_rsa Username:docker}
	I0717 00:20:45.657067   85873 ssh_runner.go:195] Run: systemctl --version
	I0717 00:20:45.660873   85873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:20:45.671316   85873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:20:45.719670   85873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-07-17 00:20:45.709439009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:20:45.720240   85873 kubeconfig.go:125] found "ha-605674" server: "https://192.168.49.254:8443"
	I0717 00:20:45.720265   85873 api_server.go:166] Checking apiserver status ...
	I0717 00:20:45.720301   85873 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:20:45.730697   85873 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1620/cgroup
	I0717 00:20:45.740753   85873 api_server.go:182] apiserver freezer: "7:freezer:/docker/a23240a5f216979cfafb73be9aa8b3cca25a02a1653237668d1f925794dfd851/crio/crio-ee6daaf7628878f9713f465334f21faef6d03e82c103347c12c7839e0d32cf6e"
	I0717 00:20:45.740812   85873 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a23240a5f216979cfafb73be9aa8b3cca25a02a1653237668d1f925794dfd851/crio/crio-ee6daaf7628878f9713f465334f21faef6d03e82c103347c12c7839e0d32cf6e/freezer.state
	I0717 00:20:45.749431   85873 api_server.go:204] freezer state: "THAWED"
	I0717 00:20:45.749470   85873 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0717 00:20:45.753019   85873 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0717 00:20:45.753039   85873 status.go:422] ha-605674 apiserver status = Running (err=<nil>)
	I0717 00:20:45.753049   85873 status.go:257] ha-605674 status: &{Name:ha-605674 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:20:45.753063   85873 status.go:255] checking status of ha-605674-m02 ...
	I0717 00:20:45.753279   85873 cli_runner.go:164] Run: docker container inspect ha-605674-m02 --format={{.State.Status}}
	I0717 00:20:45.770658   85873 status.go:330] ha-605674-m02 host status = "Stopped" (err=<nil>)
	I0717 00:20:45.770707   85873 status.go:343] host is not running, skipping remaining checks
	I0717 00:20:45.770716   85873 status.go:257] ha-605674-m02 status: &{Name:ha-605674-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:20:45.770740   85873 status.go:255] checking status of ha-605674-m03 ...
	I0717 00:20:45.771019   85873 cli_runner.go:164] Run: docker container inspect ha-605674-m03 --format={{.State.Status}}
	I0717 00:20:45.788018   85873 status.go:330] ha-605674-m03 host status = "Running" (err=<nil>)
	I0717 00:20:45.788059   85873 host.go:66] Checking if "ha-605674-m03" exists ...
	I0717 00:20:45.788404   85873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605674-m03
	I0717 00:20:45.806447   85873 host.go:66] Checking if "ha-605674-m03" exists ...
	I0717 00:20:45.806735   85873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:20:45.806777   85873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605674-m03
	I0717 00:20:45.824504   85873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/ha-605674-m03/id_rsa Username:docker}
	I0717 00:20:45.909017   85873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:20:45.920462   85873 kubeconfig.go:125] found "ha-605674" server: "https://192.168.49.254:8443"
	I0717 00:20:45.920491   85873 api_server.go:166] Checking apiserver status ...
	I0717 00:20:45.920534   85873 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:20:45.930899   85873 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1525/cgroup
	I0717 00:20:45.940553   85873 api_server.go:182] apiserver freezer: "7:freezer:/docker/ebc0c6701cdbc4908257eea1d10cfc0d1f3201f2699528c5012e3560d6a85757/crio/crio-c46ceab7b5e6da7cafe2e1a135b4dde5988c751dbdf765943060f68c51a6be57"
	I0717 00:20:45.940611   85873 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ebc0c6701cdbc4908257eea1d10cfc0d1f3201f2699528c5012e3560d6a85757/crio/crio-c46ceab7b5e6da7cafe2e1a135b4dde5988c751dbdf765943060f68c51a6be57/freezer.state
	I0717 00:20:45.949279   85873 api_server.go:204] freezer state: "THAWED"
	I0717 00:20:45.949311   85873 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0717 00:20:45.953058   85873 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0717 00:20:45.953083   85873 status.go:422] ha-605674-m03 apiserver status = Running (err=<nil>)
	I0717 00:20:45.953091   85873 status.go:257] ha-605674-m03 status: &{Name:ha-605674-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:20:45.953108   85873 status.go:255] checking status of ha-605674-m04 ...
	I0717 00:20:45.953354   85873 cli_runner.go:164] Run: docker container inspect ha-605674-m04 --format={{.State.Status}}
	I0717 00:20:45.971005   85873 status.go:330] ha-605674-m04 host status = "Running" (err=<nil>)
	I0717 00:20:45.971033   85873 host.go:66] Checking if "ha-605674-m04" exists ...
	I0717 00:20:45.971275   85873 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-605674-m04
	I0717 00:20:45.989446   85873 host.go:66] Checking if "ha-605674-m04" exists ...
	I0717 00:20:45.989740   85873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:20:45.989788   85873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-605674-m04
	I0717 00:20:46.008520   85873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/ha-605674-m04/id_rsa Username:docker}
	I0717 00:20:46.096872   85873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:20:46.107086   85873 status.go:257] ha-605674-m04 status: &{Name:ha-605674-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
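
Note: in the stderr above, the status probe decides whether an apiserver is healthy by locating its freezer cgroup inside the node, confirming the state is THAWED, and then hitting /healthz on the HA endpoint (the load-balancer VIP 192.168.49.254 from the log). Roughly the same check by hand, run inside a control-plane node (sketch; curl -k assumed acceptable for the self-signed cert):

    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')            # newest apiserver PID
    cg=$(grep -E '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3)
    sudo cat /sys/fs/cgroup/freezer$cg/freezer.state               # expect: THAWED
    curl -sk https://192.168.49.254:8443/healthz                   # expect: ok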

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

TestMultiControlPlane/serial/RestartSecondaryNode (29.38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 node start m02 -v=7 --alsologtostderr
E0717 00:20:52.662429   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-605674 node start m02 -v=7 --alsologtostderr: (28.510481919s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (6.702474425s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.70s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (142.35s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-605674 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-605674 -v=7 --alsologtostderr
E0717 00:21:55.235723   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:55.241047   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:55.251367   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:55.271719   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:55.312018   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:55.392395   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:55.552851   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:55.873702   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:56.514704   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:21:57.795228   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-605674 -v=7 --alsologtostderr: (36.620452884s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-605674 --wait=true -v=7 --alsologtostderr
E0717 00:22:00.355947   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:22:05.476567   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:22:15.717534   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:22:36.198634   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:23:08.818064   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
E0717 00:23:17.159789   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:23:36.502615   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-605674 --wait=true -v=7 --alsologtostderr: (1m45.63382978s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-605674
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (142.35s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.76s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-605674 node delete m03 -v=7 --alsologtostderr: (11.006768099s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.76s)
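
Note: the go-template in the last step emits one line per node containing the status of its Ready condition; since the test passed after deleting m03, the output should be one "True" per remaining node. The same command, with the expected shape of the output:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expected after the delete (ha-605674, m02, m04):
    #  True
    #  True
    #  True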

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

TestMultiControlPlane/serial/StopCluster (35.45s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-605674 stop -v=7 --alsologtostderr: (35.350672772s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr: exit status 7 (97.350982ms)

-- stdout --
	ha-605674
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-605674-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-605674-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 00:24:32.618091  103197 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:24:32.618205  103197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:24:32.618214  103197 out.go:304] Setting ErrFile to fd 2...
	I0717 00:24:32.618218  103197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:24:32.618408  103197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:24:32.618575  103197 out.go:298] Setting JSON to false
	I0717 00:24:32.618610  103197 mustload.go:65] Loading cluster: ha-605674
	I0717 00:24:32.618730  103197 notify.go:220] Checking for updates...
	I0717 00:24:32.619165  103197 config.go:182] Loaded profile config "ha-605674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:24:32.619188  103197 status.go:255] checking status of ha-605674 ...
	I0717 00:24:32.619674  103197 cli_runner.go:164] Run: docker container inspect ha-605674 --format={{.State.Status}}
	I0717 00:24:32.638222  103197 status.go:330] ha-605674 host status = "Stopped" (err=<nil>)
	I0717 00:24:32.638242  103197 status.go:343] host is not running, skipping remaining checks
	I0717 00:24:32.638248  103197 status.go:257] ha-605674 status: &{Name:ha-605674 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:24:32.638284  103197 status.go:255] checking status of ha-605674-m02 ...
	I0717 00:24:32.638541  103197 cli_runner.go:164] Run: docker container inspect ha-605674-m02 --format={{.State.Status}}
	I0717 00:24:32.655963  103197 status.go:330] ha-605674-m02 host status = "Stopped" (err=<nil>)
	I0717 00:24:32.655985  103197 status.go:343] host is not running, skipping remaining checks
	I0717 00:24:32.655991  103197 status.go:257] ha-605674-m02 status: &{Name:ha-605674-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:24:32.656010  103197 status.go:255] checking status of ha-605674-m04 ...
	I0717 00:24:32.656247  103197 cli_runner.go:164] Run: docker container inspect ha-605674-m04 --format={{.State.Status}}
	I0717 00:24:32.673995  103197 status.go:330] ha-605674-m04 host status = "Stopped" (err=<nil>)
	I0717 00:24:32.674015  103197 status.go:343] host is not running, skipping remaining checks
	I0717 00:24:32.674021  103197 status.go:257] ha-605674-m04 status: &{Name:ha-605674-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.45s)

TestMultiControlPlane/serial/RestartCluster (61.05s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-605674 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 00:24:39.080988   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-605674 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m0.049676279s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.05s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

TestMultiControlPlane/serial/AddSecondaryNode (45.54s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-605674 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-605674 --control-plane -v=7 --alsologtostderr: (44.729032129s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-605674 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.54s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

TestJSONOutput/start/Command (49.47s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-840194 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0717 00:26:55.235735   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-840194 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (49.473969055s)
--- PASS: TestJSONOutput/start/Command (49.47s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-840194 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-840194 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.68s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-840194 --output=json --user=testUser
E0717 00:27:22.923643   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-840194 --output=json --user=testUser: (5.682406172s)
--- PASS: TestJSONOutput/stop/Command (5.68s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-980252 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-980252 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.81412ms)

-- stdout --
	{"specversion":"1.0","id":"22dacb96-81ce-4a92-bc46-d00fe7fc103e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-980252] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5be51be-fc43-4a72-83d0-1155557fadec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19265"}}
	{"specversion":"1.0","id":"f60f60d6-90b7-4241-a534-0f1a6e43b2db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6edfc720-0f99-4882-bac3-1ac69765a8bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig"}}
	{"specversion":"1.0","id":"2054b1f5-5b46-4e31-af44-02c40e3fef5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube"}}
	{"specversion":"1.0","id":"fc889d8b-625e-4b8f-b970-56d011c5016e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f8e3cd2f-d595-4200-a225-c98997bda546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f2918edf-848b-4a38-b73e-7babbb5dfe32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-980252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-980252
--- PASS: TestErrorJSONOutput (0.20s)
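
Note: every stdout line above is a CloudEvents 1.0 envelope, and on failure the stream ends with an io.k8s.sigs.minikube.error event whose data carries the exit code and error name. Since minikube emits one JSON object per line, the error name can be pulled out with jq (sketch; jq availability assumed):

    out/minikube-linux-amd64 start -p json-output-error-980252 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'
    # prints: DRV_UNSUPPORTED_OS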

TestKicCustomNetwork/create_custom_network (28.53s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-339050 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-339050 --network=: (26.52114622s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-339050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-339050
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-339050: (1.992948162s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.53s)

TestKicCustomNetwork/use_default_bridge_network (26.78s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-905112 --network=bridge
E0717 00:28:08.819868   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-905112 --network=bridge: (24.932533267s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-905112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-905112
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-905112: (1.82932446s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.78s)

TestKicExistingNetwork (25.64s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-430907 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-430907 --network=existing-network: (23.536258188s)
helpers_test.go:175: Cleaning up "existing-network-430907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-430907
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-430907: (1.947010879s)
--- PASS: TestKicExistingNetwork (25.64s)
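
Note: TestKicExistingNetwork points minikube at a docker network that already exists; the creation step itself is not shown in the log. The same flow by hand (network options such as subnet are left to docker defaults here, which may differ from what the test sets up):

    docker network create existing-network
    out/minikube-linux-amd64 start -p existing-network-430907 --network=existing-network
    docker network ls --format '{{.Name}}' | grep -x existing-network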

TestKicCustomSubnet (24.42s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-741027 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-741027 --subnet=192.168.60.0/24: (22.335579879s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-741027 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-741027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-741027
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-741027: (2.062863873s)
--- PASS: TestKicCustomSubnet (24.42s)

TestKicStaticIP (24.62s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-017303 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-017303 --static-ip=192.168.200.200: (22.434169414s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-017303 ip
helpers_test.go:175: Cleaning up "static-ip-017303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-017303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-017303: (2.062291256s)
--- PASS: TestKicStaticIP (24.62s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (49.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-111548 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-111548 --driver=docker  --container-runtime=crio: (20.626991555s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-115118 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-115118 --driver=docker  --container-runtime=crio: (23.840310179s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-111548
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-115118
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-115118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-115118
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-115118: (1.881854848s)
helpers_test.go:175: Cleaning up "first-111548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-111548
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-111548: (2.209554369s)
--- PASS: TestMinikubeProfile (49.60s)

TestMountStart/serial/StartWithMountFirst (5.55s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-161909 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-161909 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.553777889s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.55s)
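(A minimal sketch of the 9p mount options exercised above; the profile name is hypothetical and the flag values mirror the logged command. The mounted host directory appears in the guest at /minikube-host, which is what the VerifyMount steps below check.)

	# Start without Kubernetes, mounting a host directory over 9p
	# with explicit uid/gid, msize and port
	minikube start -p demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
	# The mount should be listable inside the guest
	minikube -p demo ssh -- ls /minikube-host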

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-161909 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (5.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-175408 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-175408 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.54800293s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.55s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175408 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.6s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-161909 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-161909 --alsologtostderr -v=5: (1.598526463s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175408 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-175408
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-175408: (1.169587447s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.13s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-175408
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-175408: (6.128405747s)
--- PASS: TestMountStart/serial/RestartStopped (7.13s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175408 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (81.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756238 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 00:31:55.235619   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756238 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.471116796s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.91s)
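(A minimal sketch of the two-node bring-up above; the profile name is hypothetical.)

	# Create a two-node cluster and wait for all components to be ready
	minikube start -p demo --nodes=2 --memory=2200 --wait=true --driver=docker --container-runtime=crio
	# Status reports every node: the control plane plus each worker
	minikube -p demo status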

TestMultiNode/serial/DeployApp2Nodes (3.29s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-756238 -- rollout status deployment/busybox: (1.956802864s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-7wsk7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-l5tmh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-7wsk7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-l5tmh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-7wsk7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-l5tmh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.29s)
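(The lookups above reduce to the following sketch; the context and pod names are hypothetical. One busybox replica runs on each node, so resolving cluster DNS from both pods proves DNS works cluster-wide.)

	# Resolve the cluster DNS name from inside a pod
	kubectl --context demo exec busybox-demo -- nslookup kubernetes.default.svc.cluster.local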

TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-7wsk7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-7wsk7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-l5tmh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-756238 -- exec busybox-fc5497c4f-l5tmh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
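(A minimal sketch of the host-reachability check above; context and pod names are hypothetical. Line 5 of nslookup's output carries the resolved address, hence the awk/cut pipeline.)

	# Resolve the host gateway name from inside a pod ...
	kubectl --context demo exec busybox-demo -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# ... then ping the address it resolves to (192.168.67.1 in the run above)
	kubectl --context demo exec busybox-demo -- sh -c "ping -c 1 192.168.67.1"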

TestMultiNode/serial/AddNode (31.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-756238 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-756238 -v 3 --alsologtostderr: (30.654384961s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.24s)
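(A minimal sketch of growing a cluster in place; the profile name is hypothetical.)

	# Append one worker node to an existing profile, then recheck status
	minikube node add -p demo
	minikube -p demo status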

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-756238 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.29s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (8.82s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp testdata/cp-test.txt multinode-756238:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3143910887/001/cp-test_multinode-756238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238:/home/docker/cp-test.txt multinode-756238-m02:/home/docker/cp-test_multinode-756238_multinode-756238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m02 "sudo cat /home/docker/cp-test_multinode-756238_multinode-756238-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238:/home/docker/cp-test.txt multinode-756238-m03:/home/docker/cp-test_multinode-756238_multinode-756238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m03 "sudo cat /home/docker/cp-test_multinode-756238_multinode-756238-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp testdata/cp-test.txt multinode-756238-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3143910887/001/cp-test_multinode-756238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238-m02:/home/docker/cp-test.txt multinode-756238:/home/docker/cp-test_multinode-756238-m02_multinode-756238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238 "sudo cat /home/docker/cp-test_multinode-756238-m02_multinode-756238.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238-m02:/home/docker/cp-test.txt multinode-756238-m03:/home/docker/cp-test_multinode-756238-m02_multinode-756238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m03 "sudo cat /home/docker/cp-test_multinode-756238-m02_multinode-756238-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp testdata/cp-test.txt multinode-756238-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3143910887/001/cp-test_multinode-756238-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238-m03:/home/docker/cp-test.txt multinode-756238:/home/docker/cp-test_multinode-756238-m03_multinode-756238.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238 "sudo cat /home/docker/cp-test_multinode-756238-m03_multinode-756238.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 cp multinode-756238-m03:/home/docker/cp-test.txt multinode-756238-m02:/home/docker/cp-test_multinode-756238-m03_multinode-756238-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 ssh -n multinode-756238-m02 "sudo cat /home/docker/cp-test_multinode-756238-m03_multinode-756238-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.82s)
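(The copy matrix above reduces to four directions; names are hypothetical. Secondary nodes are addressed as <profile>-m02, <profile>-m03 and so on, and "ssh -n" selects which node a command runs on.)

	# host -> node
	minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
	# node -> host
	minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node
	minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt
	# verify on the target node
	minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"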

TestMultiNode/serial/StopNode (2.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-756238 node stop m03: (1.177728043s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756238 status: exit status 7 (450.966962ms)
-- stdout --
	multinode-756238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-756238-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-756238-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756238 status --alsologtostderr: exit status 7 (455.553624ms)
-- stdout --
	multinode-756238
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-756238-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-756238-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0717 00:33:00.543099  169612 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:33:00.543240  169612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:33:00.543252  169612 out.go:304] Setting ErrFile to fd 2...
	I0717 00:33:00.543258  169612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:33:00.543458  169612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:33:00.543651  169612 out.go:298] Setting JSON to false
	I0717 00:33:00.543681  169612 mustload.go:65] Loading cluster: multinode-756238
	I0717 00:33:00.543732  169612 notify.go:220] Checking for updates...
	I0717 00:33:00.544240  169612 config.go:182] Loaded profile config "multinode-756238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:33:00.544262  169612 status.go:255] checking status of multinode-756238 ...
	I0717 00:33:00.544695  169612 cli_runner.go:164] Run: docker container inspect multinode-756238 --format={{.State.Status}}
	I0717 00:33:00.565031  169612 status.go:330] multinode-756238 host status = "Running" (err=<nil>)
	I0717 00:33:00.565056  169612 host.go:66] Checking if "multinode-756238" exists ...
	I0717 00:33:00.565325  169612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-756238
	I0717 00:33:00.582767  169612 host.go:66] Checking if "multinode-756238" exists ...
	I0717 00:33:00.583079  169612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:33:00.583124  169612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-756238
	I0717 00:33:00.600623  169612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/multinode-756238/id_rsa Username:docker}
	I0717 00:33:00.685266  169612 ssh_runner.go:195] Run: systemctl --version
	I0717 00:33:00.689482  169612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:33:00.700453  169612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:33:00.754277  169612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-07-17 00:33:00.744225268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:33:00.754794  169612 kubeconfig.go:125] found "multinode-756238" server: "https://192.168.67.2:8443"
	I0717 00:33:00.754817  169612 api_server.go:166] Checking apiserver status ...
	I0717 00:33:00.754846  169612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:33:00.765198  169612 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1624/cgroup
	I0717 00:33:00.774136  169612 api_server.go:182] apiserver freezer: "7:freezer:/docker/fc64c761aeb8595ce6d9180ce186e57c0f06fc0f4e01fab62d121e437963c07b/crio/crio-6202e1015f25b69969bc8a7e9ffe2b642061b816c03cc8408282741c25f8d7af"
	I0717 00:33:00.774219  169612 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fc64c761aeb8595ce6d9180ce186e57c0f06fc0f4e01fab62d121e437963c07b/crio/crio-6202e1015f25b69969bc8a7e9ffe2b642061b816c03cc8408282741c25f8d7af/freezer.state
	I0717 00:33:00.782258  169612 api_server.go:204] freezer state: "THAWED"
	I0717 00:33:00.782294  169612 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 00:33:00.786284  169612 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0717 00:33:00.786311  169612 status.go:422] multinode-756238 apiserver status = Running (err=<nil>)
	I0717 00:33:00.786321  169612 status.go:257] multinode-756238 status: &{Name:multinode-756238 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:33:00.786340  169612 status.go:255] checking status of multinode-756238-m02 ...
	I0717 00:33:00.786637  169612 cli_runner.go:164] Run: docker container inspect multinode-756238-m02 --format={{.State.Status}}
	I0717 00:33:00.804512  169612 status.go:330] multinode-756238-m02 host status = "Running" (err=<nil>)
	I0717 00:33:00.804553  169612 host.go:66] Checking if "multinode-756238-m02" exists ...
	I0717 00:33:00.804848  169612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-756238-m02
	I0717 00:33:00.821865  169612 host.go:66] Checking if "multinode-756238-m02" exists ...
	I0717 00:33:00.822181  169612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:33:00.822222  169612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-756238-m02
	I0717 00:33:00.840201  169612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19265-12715/.minikube/machines/multinode-756238-m02/id_rsa Username:docker}
	I0717 00:33:00.924851  169612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:33:00.935693  169612 status.go:257] multinode-756238-m02 status: &{Name:multinode-756238-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:33:00.935733  169612 status.go:255] checking status of multinode-756238-m03 ...
	I0717 00:33:00.936112  169612 cli_runner.go:164] Run: docker container inspect multinode-756238-m03 --format={{.State.Status}}
	I0717 00:33:00.953659  169612 status.go:330] multinode-756238-m03 host status = "Stopped" (err=<nil>)
	I0717 00:33:00.953683  169612 status.go:343] host is not running, skipping remaining checks
	I0717 00:33:00.953689  169612 status.go:257] multinode-756238-m03 status: &{Name:multinode-756238-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
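(A minimal sketch of the behaviour verified above; the profile name is hypothetical.)

	# Stop one worker; while any node is down, `status` exits with code 7
	minikube -p demo node stop m03
	minikube -p demo status   # exit status 7, m03 reported as Stopped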

TestMultiNode/serial/StartAfterStop (8.87s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 node start m03 -v=7 --alsologtostderr
E0717 00:33:08.818314   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-756238 node start m03 -v=7 --alsologtostderr: (8.226608525s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.87s)
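(Continuing the sketch above: the stopped node rejoins without disturbing the rest of the cluster; names are hypothetical.)

	# Restart only the stopped node, then confirm it is Ready again
	minikube -p demo node start m03
	kubectl get nodes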

TestMultiNode/serial/RestartKeepsNodes (89.33s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756238
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-756238
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-756238: (24.671906021s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756238 --wait=true -v=8 --alsologtostderr
E0717 00:34:31.863181   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756238 --wait=true -v=8 --alsologtostderr: (1m4.575910877s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756238
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.33s)

TestMultiNode/serial/DeleteNode (4.95s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-756238 node delete m03: (4.388715382s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.95s)

TestMultiNode/serial/StopMultiNode (23.65s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-756238 stop: (23.501495665s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756238 status: exit status 7 (77.256735ms)
-- stdout --
	multinode-756238
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-756238-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-756238 status --alsologtostderr: exit status 7 (75.761406ms)
-- stdout --
	multinode-756238
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-756238-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0717 00:35:07.735871  178914 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:35:07.735999  178914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:35:07.736007  178914 out.go:304] Setting ErrFile to fd 2...
	I0717 00:35:07.736011  178914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:35:07.736203  178914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:35:07.736348  178914 out.go:298] Setting JSON to false
	I0717 00:35:07.736374  178914 mustload.go:65] Loading cluster: multinode-756238
	I0717 00:35:07.736475  178914 notify.go:220] Checking for updates...
	I0717 00:35:07.736749  178914 config.go:182] Loaded profile config "multinode-756238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:35:07.736763  178914 status.go:255] checking status of multinode-756238 ...
	I0717 00:35:07.737125  178914 cli_runner.go:164] Run: docker container inspect multinode-756238 --format={{.State.Status}}
	I0717 00:35:07.754793  178914 status.go:330] multinode-756238 host status = "Stopped" (err=<nil>)
	I0717 00:35:07.754816  178914 status.go:343] host is not running, skipping remaining checks
	I0717 00:35:07.754824  178914 status.go:257] multinode-756238 status: &{Name:multinode-756238 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 00:35:07.754866  178914 status.go:255] checking status of multinode-756238-m02 ...
	I0717 00:35:07.755185  178914 cli_runner.go:164] Run: docker container inspect multinode-756238-m02 --format={{.State.Status}}
	I0717 00:35:07.771089  178914 status.go:330] multinode-756238-m02 host status = "Stopped" (err=<nil>)
	I0717 00:35:07.771108  178914 status.go:343] host is not running, skipping remaining checks
	I0717 00:35:07.771113  178914 status.go:257] multinode-756238-m02 status: &{Name:multinode-756238-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.65s)

TestMultiNode/serial/RestartMultiNode (54.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756238 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756238 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.474702476s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-756238 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.03s)

TestMultiNode/serial/ValidateNameConflict (26.02s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-756238
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756238-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-756238-m02 --driver=docker  --container-runtime=crio: exit status 14 (61.548599ms)
-- stdout --
	* [multinode-756238-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-756238-m02' is duplicated with machine name 'multinode-756238-m02' in profile 'multinode-756238'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-756238-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-756238-m03 --driver=docker  --container-runtime=crio: (23.816462636s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-756238
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-756238: exit status 80 (258.772905ms)
-- stdout --
	* Adding node m03 to cluster multinode-756238 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-756238-m03 already exists in multinode-756238-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-756238-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-756238-m03: (1.837557403s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.02s)
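(The naming rule enforced above, as a sketch; names are hypothetical. Machine names inside a multi-node profile (demo-m02, demo-m03, ...) are effectively reserved, so a new profile may not reuse them.)

	# Rejected with exit code 14 (MK_USAGE) if profile "demo" already owns machine demo-m02
	minikube start -p demo-m02 --driver=docker --container-runtime=crio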

TestPreload (122.34s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-734276 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 00:36:55.235665   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
E0717 00:38:08.817804   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-734276 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.092451222s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-734276 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-734276
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-734276: (5.666450236s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-734276 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0717 00:38:18.286212   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-734276 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.277048249s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-734276 image list
helpers_test.go:175: Cleaning up "test-preload-734276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-734276
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-734276: (2.268422659s)
--- PASS: TestPreload (122.34s)
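(A minimal sketch of the preload check above; the profile name is hypothetical.)

	# Start with preloaded images disabled, pull an extra image, restart,
	# then confirm the image survived the restart
	minikube start -p demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	minikube -p demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p demo
	minikube start -p demo --driver=docker --container-runtime=crio
	minikube -p demo image list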

TestScheduledStopUnix (100s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-330546 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-330546 --memory=2048 --driver=docker  --container-runtime=crio: (24.725399646s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-330546 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-330546 -n scheduled-stop-330546
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-330546 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-330546 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-330546 -n scheduled-stop-330546
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-330546
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-330546 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-330546
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-330546: exit status 7 (61.121867ms)
-- stdout --
	scheduled-stop-330546
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-330546 -n scheduled-stop-330546
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-330546 -n scheduled-stop-330546: exit status 7 (61.826949ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-330546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-330546
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-330546: (3.998573901s)
--- PASS: TestScheduledStopUnix (100.00s)
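(A minimal sketch of scheduled stops as exercised above; the profile name is hypothetical. --schedule takes a Go duration such as 15s or 5m.)

	minikube stop -p demo --schedule 5m                  # arm a stop 5 minutes out
	minikube status --format='{{.TimeToStop}}' -p demo   # time remaining while armed
	minikube stop -p demo --cancel-scheduled             # disarm it again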

TestInsufficientStorage (10.19s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-133150 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-133150 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.835346852s)
-- stdout --
	{"specversion":"1.0","id":"60eaa901-e59c-43a1-a40d-d99d35595e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-133150] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6dfa36c6-2495-46a5-b1ba-f633dfcf39c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19265"}}
	{"specversion":"1.0","id":"1c056981-cf2d-45c0-8913-5ed25f0a28c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9f7996a8-3433-4dec-9364-4b2af0175770","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig"}}
	{"specversion":"1.0","id":"d07d4647-f66a-4430-acbe-ea28aab49246","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube"}}
	{"specversion":"1.0","id":"bd5406f9-c064-4d60-b4f1-96b8dee81c31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1e4f8960-e717-4e8b-9149-8e6eeaa5dc09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7e8ca59c-0b9a-473e-a836-73c8973ae767","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f55405fa-d4f1-4c08-a349-af6f226b24a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"38914215-76f4-4835-b302-ab519ff5508f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8d53c8b-b902-47db-a708-6f6fe7f328fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7bd57e30-70e3-41bb-91ca-9eff77386325","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-133150\" primary control-plane node in \"insufficient-storage-133150\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6374a835-979c-4231-b231-9b361ac63496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721064868-19249 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"79615608-82e3-4a8e-b64b-14d5e12787f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"720ae9ad-cfad-4ff7-b2bd-002507cc0335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-133150 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-133150 --output=json --layout=cluster: exit status 7 (252.669518ms)
-- stdout --
	{"Name":"insufficient-storage-133150","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-133150","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0717 00:40:22.144748  201279 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-133150" does not appear in /home/jenkins/minikube-integration/19265-12715/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-133150 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-133150 --output=json --layout=cluster: exit status 7 (254.370291ms)
-- stdout --
	{"Name":"insufficient-storage-133150","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-133150","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0717 00:40:22.399700  201377 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-133150" does not appear in /home/jenkins/minikube-integration/19265-12715/kubeconfig
	E0717 00:40:22.409607  201377 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/insufficient-storage-133150/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-133150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-133150
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-133150: (1.848089877s)
--- PASS: TestInsufficientStorage (10.19s)
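(As the logged output shows, the test simulates a full disk through the test-only environment variables MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE; their exact semantics are internal to minikube's test harness. A sketch, with a hypothetical profile name:)

	# With the faked capacity, start aborts with exit code 26 (RSRC_DOCKER_STORAGE);
	# the logged advice notes that --force would skip the free-space check
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 minikube start -p demo --output=json --driver=docker --container-runtime=crio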

TestRunningBinaryUpgrade (82.23s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.255504170 start -p running-upgrade-251031 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.255504170 start -p running-upgrade-251031 --memory=2200 --vm-driver=docker  --container-runtime=crio: (30.792871253s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-251031 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-251031 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.069514927s)
helpers_test.go:175: Cleaning up "running-upgrade-251031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-251031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-251031: (5.589751512s)
--- PASS: TestRunningBinaryUpgrade (82.23s)
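(A minimal sketch of the in-place binary upgrade above: an older release binary creates the cluster, then the freshly built binary restarts the same profile while it is running. The binary path and profile name are hypothetical; note the older release still uses --vm-driver.)

	/tmp/minikube-v1.26.0 start -p demo --memory=2200 --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p demo --memory=2200 --driver=docker --container-runtime=crio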

TestKubernetesUpgrade (358.09s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-748628 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-748628 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.556380769s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-748628
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-748628: (5.145165554s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-748628 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-748628 status --format={{.Host}}: exit status 7 (80.205656ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-748628 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-748628 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.915516792s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-748628 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-748628 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-748628 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (104.956905ms)
-- stdout --
	* [kubernetes-upgrade-748628] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-748628
	    minikube start -p kubernetes-upgrade-748628 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7486282 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-748628 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
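Note: exit status 106 is the K8S_DOWNGRADE_UNSUPPORTED guard this step asserts: minikube refuses to downgrade an existing cluster in place. The supported route, echoing the suggestion above, is to recreate the profile at the older version:

    minikube delete -p kubernetes-upgrade-748628
    minikube start -p kubernetes-upgrade-748628 --kubernetes-version=v1.20.0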
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-748628 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-748628 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.834302158s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-748628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-748628
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-748628: (2.356323857s)
--- PASS: TestKubernetesUpgrade (358.09s)

TestMissingContainerUpgrade (160.81s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3771791110 start -p missing-upgrade-881589 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3771791110 start -p missing-upgrade-881589 --memory=2200 --driver=docker  --container-runtime=crio: (1m25.937088365s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-881589
E0717 00:41:55.236122   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/functional-567309/client.crt: no such file or directory
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-881589: (12.458059583s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-881589
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-881589 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-881589 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.978673356s)
helpers_test.go:175: Cleaning up "missing-upgrade-881589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-881589
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-881589: (3.876143717s)
--- PASS: TestMissingContainerUpgrade (160.81s)
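Note: this test covers recovery when a profile's container has disappeared: create the cluster with an old release, remove its container directly through docker, then require the binary under test to rebuild it from the profile on disk. Condensed from the steps above (the /tmp path is just where the old v1.26.0 binary was staged for this run):

    /tmp/minikube-v1.26.0.3771791110 start -p missing-upgrade-881589 --memory=2200 --driver=docker --container-runtime=crio
    docker stop missing-upgrade-881589 && docker rm missing-upgrade-881589
    out/minikube-linux-amd64 start -p missing-upgrade-881589 --memory=2200 --driver=docker --container-runtime=crio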

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865292 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-865292 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (70.454518ms)

-- stdout --
	* [NoKubernetes-865292] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
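Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is exactly what this test asserts (exit status 14, MK_USAGE). Either drop the version flag, or clear a globally configured version as the hint above suggests:

    out/minikube-linux-amd64 start -p NoKubernetes-865292 --no-kubernetes --driver=docker --container-runtime=crio
    minikube config unset kubernetes-version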

TestNoKubernetes/serial/StartWithK8s (35.5s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865292 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-865292 --driver=docker  --container-runtime=crio: (35.175040428s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-865292 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.50s)

TestStoppedBinaryUpgrade/Upgrade (91.28s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3636515737 start -p stopped-upgrade-865277 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3636515737 start -p stopped-upgrade-865277 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m3.17583725s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3636515737 -p stopped-upgrade-865277 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3636515737 -p stopped-upgrade-865277 stop: (2.497452848s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-865277 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-865277 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.61024141s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (91.28s)
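Note: the stopped-binary upgrade path is: start with an old release (v1.26.0 here), stop with that same binary, then start again with the binary under test, which must adopt the stopped profile. Condensed from the steps above:

    /tmp/minikube-v1.26.0.3636515737 start -p stopped-upgrade-865277 --memory=2200 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.26.0.3636515737 -p stopped-upgrade-865277 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-865277 --memory=2200 --driver=docker --container-runtime=crio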

TestNoKubernetes/serial/StartWithStopK8s (11.72s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865292 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-865292 --no-kubernetes --driver=docker  --container-runtime=crio: (9.512885806s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-865292 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-865292 status -o json: exit status 2 (277.364648ms)

-- stdout --
	{"Name":"NoKubernetes-865292","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-865292
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-865292: (1.934078566s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.72s)

TestNoKubernetes/serial/Start (5.22s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865292 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-865292 --no-kubernetes --driver=docker  --container-runtime=crio: (5.220590046s)
--- PASS: TestNoKubernetes/serial/Start (5.22s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-865292 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-865292 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.343664ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
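Note: the verification leans on systemctl semantics: "systemctl is-active --quiet" exits 0 only for an active unit, so the ssh command's exit status 3 (inactive) is what proves kubelet is not running in this --no-kubernetes profile. The same probe by hand:

    out/minikube-linux-amd64 ssh -p NoKubernetes-865292 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero while Kubernetes is disabled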

TestNoKubernetes/serial/ProfileList (1.58s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.58s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-865292
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-865292: (1.240862864s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (6.56s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-865292 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-865292 --driver=docker  --container-runtime=crio: (6.559886214s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.56s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-865292 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-865292 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.020147ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPause/serial/Start (59.31s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-313188 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-313188 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (59.309730765s)
--- PASS: TestPause/serial/Start (59.31s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-865277
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

TestPause/serial/SecondStartNoReconfiguration (32.16s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-313188 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-313188 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.143103014s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.16s)

TestPause/serial/Pause (0.89s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-313188 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-313188 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-313188 --output=json --layout=cluster: exit status 2 (334.658732ms)

-- stdout --
	{"Name":"pause-313188","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-313188","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
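Note: with --layout=cluster the status JSON reuses HTTP-style codes (200 OK, 405 Stopped, 418 Paused), and the overall exit status 2 is expected for a paused cluster. If jq is available, the per-component states can be pulled from the same output:

    out/minikube-linux-amd64 status -p pause-313188 --output=json --layout=cluster \
      | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
    # apiserver: Paused
    # kubelet: Stopped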

TestPause/serial/Unpause (0.72s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-313188 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

TestPause/serial/PauseAgain (1.17s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-313188 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-313188 --alsologtostderr -v=5: (1.174190504s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

TestPause/serial/DeletePaused (3.04s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-313188 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-313188 --alsologtostderr -v=5: (3.043826943s)
--- PASS: TestPause/serial/DeletePaused (3.04s)

TestPause/serial/VerifyDeletedResources (16.25s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.191823683s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-313188
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-313188: exit status 1 (18.717378ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-313188: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.25s)
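Note: the point of this check is that "minikube delete" leaves nothing behind; the failing "docker volume inspect" (no such volume) is the desired result. A quick manual equivalent:

    docker ps -a --filter name=pause-313188
    docker volume inspect pause-313188 2>/dev/null || echo "volume removed"
    docker network ls --filter name=pause-313188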

TestNetworkPlugins/group/false (3.14s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-548380 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-548380 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (162.496539ms)

-- stdout --
	* [false-548380] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19265
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0717 00:43:08.273819  245026 out.go:291] Setting OutFile to fd 1 ...
	I0717 00:43:08.273930  245026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:43:08.273942  245026 out.go:304] Setting ErrFile to fd 2...
	I0717 00:43:08.273948  245026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:43:08.274161  245026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19265-12715/.minikube/bin
	I0717 00:43:08.274728  245026 out.go:298] Setting JSON to false
	I0717 00:43:08.275908  245026 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5135,"bootTime":1721171853,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 00:43:08.275985  245026 start.go:139] virtualization: kvm guest
	I0717 00:43:08.278595  245026 out.go:177] * [false-548380] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 00:43:08.280220  245026 out.go:177]   - MINIKUBE_LOCATION=19265
	I0717 00:43:08.280260  245026 notify.go:220] Checking for updates...
	I0717 00:43:08.283173  245026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:43:08.285001  245026 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19265-12715/kubeconfig
	I0717 00:43:08.286566  245026 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19265-12715/.minikube
	I0717 00:43:08.288084  245026 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 00:43:08.289521  245026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:43:08.291399  245026 config.go:182] Loaded profile config "force-systemd-env-103026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 00:43:08.291514  245026 config.go:182] Loaded profile config "kubernetes-upgrade-748628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 00:43:08.291630  245026 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:43:08.317245  245026 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
	I0717 00:43:08.317439  245026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:43:08.370562  245026 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:66 SystemTime:2024-07-17 00:43:08.358987656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647951872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 00:43:08.370711  245026 docker.go:307] overlay module found
	I0717 00:43:08.374238  245026 out.go:177] * Using the docker driver based on user configuration
	I0717 00:43:08.376011  245026 start.go:297] selected driver: docker
	I0717 00:43:08.376049  245026 start.go:901] validating driver "docker" against <nil>
	I0717 00:43:08.376075  245026 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:43:08.378543  245026 out.go:177] 
	W0717 00:43:08.380225  245026 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 00:43:08.381790  245026 out.go:177] 

** /stderr **
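Note: exit status 14 (MK_USAGE) is the expected result here: the crio runtime always needs a CNI plugin, so --cni=false is rejected before any cluster is created. An invocation that would be accepted instead (bridge is one of the built-in --cni choices):

    out/minikube-linux-amd64 start -p false-548380 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio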
E0717 00:43:08.817776   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-548380 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-548380

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-548380

>>> host: /etc/nsswitch.conf:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /etc/hosts:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /etc/resolv.conf:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-548380

>>> host: crictl pods:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: crictl containers:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> k8s: describe netcat deployment:
error: context "false-548380" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-548380" does not exist

>>> k8s: netcat logs:
error: context "false-548380" does not exist

>>> k8s: describe coredns deployment:
error: context "false-548380" does not exist

>>> k8s: describe coredns pods:
error: context "false-548380" does not exist

>>> k8s: coredns logs:
error: context "false-548380" does not exist

>>> k8s: describe api server pod(s):
error: context "false-548380" does not exist

>>> k8s: api server logs:
error: context "false-548380" does not exist

>>> host: /etc/cni:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: ip a s:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: ip r s:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: iptables-save:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: iptables table nat:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> k8s: describe kube-proxy daemon set:
error: context "false-548380" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-548380" does not exist

>>> k8s: kube-proxy logs:
error: context "false-548380" does not exist

>>> host: kubelet daemon status:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: kubelet daemon config:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> k8s: kubelet logs:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 00:43:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-748628
contexts:
- context:
    cluster: kubernetes-upgrade-748628
    user: kubernetes-upgrade-748628
  name: kubernetes-upgrade-748628
current-context: kubernetes-upgrade-748628
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-748628
  user:
    client-certificate: /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/kubernetes-upgrade-748628/client.crt
    client-key: /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/kubernetes-upgrade-748628/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-548380

>>> host: docker daemon status:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: docker daemon config:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /etc/docker/daemon.json:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: docker system info:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: cri-docker daemon status:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: cri-docker daemon config:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: cri-dockerd version:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: containerd daemon status:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: containerd daemon config:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /etc/containerd/config.toml:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: containerd config dump:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: crio daemon status:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: crio daemon config:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: /etc/crio:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

>>> host: crio config:
* Profile "false-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-548380"

----------------------- debugLogs end: false-548380 [took: 2.810722177s] --------------------------------
helpers_test.go:175: Cleaning up "false-548380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-548380
--- PASS: TestNetworkPlugins/group/false (3.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (139.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-844823 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-844823 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m19.968906164s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.97s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-527297 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-527297 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (1m0.884102145s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.88s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-527297 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6bf6153a-c568-4ab0-a5ac-4c9342b32010] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6bf6153a-c568-4ab0-a5ac-4c9342b32010] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003524465s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-527297 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)
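Note: the deploy check is two-stage: wait for the busybox pod to reach Running, then exec into it; "ulimit -n" simply proves exec works end-to-end and reports the container's open-file limit. The same probe by hand:

    kubectl --context default-k8s-diff-port-527297 exec busybox -- /bin/sh -c "ulimit -n"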

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-527297 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-527297 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-527297 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-527297 --alsologtostderr -v=3: (11.854894461s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.86s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297: exit status 7 (63.338675ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-527297 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (261.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-527297 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-527297 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (4m21.620929161s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (261.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-844823 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ecea34a4-cdbb-4913-9abc-bcea8f662c96] Pending
helpers_test.go:344: "busybox" [ecea34a4-cdbb-4913-9abc-bcea8f662c96] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ecea34a4-cdbb-4913-9abc-bcea8f662c96] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003723295s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-844823 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-844823 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-844823 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-844823 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-844823 --alsologtostderr -v=3: (11.828286864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-844823 -n old-k8s-version-844823
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-844823 -n old-k8s-version-844823: exit status 7 (64.668038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-844823 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (133.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-844823 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-844823 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m12.942403045s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-844823 -n old-k8s-version-844823
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (133.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (63.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-467746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-467746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m3.839681663s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (29.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-482569 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-482569 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (29.204829288s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-467746 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3fbbcc96-651a-4817-a561-df83ef14c9ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3fbbcc96-651a-4817-a561-df83ef14c9ee] Running
E0717 00:48:08.818687   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004291507s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-467746 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-467746 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-467746 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-467746 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-467746 --alsologtostderr -v=3: (11.927730881s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-467746 -n no-preload-467746
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-467746 -n no-preload-467746: exit status 7 (80.567704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-467746 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-467746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-467746 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (4m22.337149769s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-467746 -n no-preload-467746
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-482569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-482569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.128113708s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)
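The "cni mode requires additional setup before pods can schedule" warning explains the 0.00s results on this profile: newest-cni is started with --network-plugin=cni and a custom pod CIDR, and no workloads are expected to schedule without further setup, so DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop are deliberately reduced to no-ops that log this warning and pass.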

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-482569 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-482569 --alsologtostderr -v=3: (1.224429924s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-482569 -n newest-cni-482569
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-482569 -n newest-cni-482569: exit status 7 (61.553389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-482569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-482569 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-482569 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (13.3948888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-482569 -n newest-cni-482569
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.74s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-482569 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-482569 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-482569 --alsologtostderr -v=1: (1.302084229s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-482569 -n newest-cni-482569
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-482569 -n newest-cni-482569: exit status 2 (333.334057ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-482569 -n newest-cni-482569
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-482569 -n newest-cni-482569: exit status 2 (310.461299ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-482569 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-482569 -n newest-cni-482569
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-482569 -n newest-cni-482569
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)
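The Pause sequence reads as: pause the profile, confirm through status that the apiserver reports Paused and the kubelet reports Stopped (exit status 2 here appears to be the status bitmask flagging the non-running cluster component, not a test failure), then unpause and confirm both probes succeed again. Condensed to the commands actually run:

    out/minikube-linux-amd64 pause -p newest-cni-482569 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-482569 -n newest-cni-482569   # prints Paused, exits 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-482569 -n newest-cni-482569     # prints Stopped, exits 2
    out/minikube-linux-amd64 unpause -p newest-cni-482569 --alsologtostderr -v=1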

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mkkrh" [04ea1e02-c52e-44fc-868c-e735e0001510] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004490191s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (56.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-312676 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-312676 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (56.942953839s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mkkrh" [04ea1e02-c52e-44fc-868c-e735e0001510] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014496563s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-844823 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-844823 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-844823 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-844823 --alsologtostderr -v=1: (1.016829841s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-844823 -n old-k8s-version-844823
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-844823 -n old-k8s-version-844823: exit status 2 (288.989957ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-844823 -n old-k8s-version-844823
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-844823 -n old-k8s-version-844823: exit status 2 (344.586154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-844823 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-844823 -n old-k8s-version-844823
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-844823 -n old-k8s-version-844823
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (51.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (51.636750809s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wxf4n" [d4722752-18e4-49eb-8991-b12c13f4e9ad] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003455499s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wxf4n" [d4722752-18e4-49eb-8991-b12c13f4e9ad] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004088195s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-527297 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-527297 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-527297 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297: exit status 2 (294.437263ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297: exit status 2 (309.43674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-527297 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-527297 -n default-k8s-diff-port-527297
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-312676 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72166586-bf95-439a-9189-27dad08ce929] Pending
helpers_test.go:344: "busybox" [72166586-bf95-439a-9189-27dad08ce929] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72166586-bf95-439a-9189-27dad08ce929] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004045344s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-312676 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (54.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0717 00:49:50.861988   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
E0717 00:49:50.867099   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
E0717 00:49:50.877438   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
E0717 00:49:50.898116   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
E0717 00:49:50.938408   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
E0717 00:49:51.018731   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
E0717 00:49:51.179159   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
E0717 00:49:51.500143   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
E0717 00:49:52.141219   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (54.273674183s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.27s)
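The interleaved E0717 cert_rotation.go:168 lines are not produced by the kindnet test itself: they appear to come from client-go's certificate-rotation watcher inside the long-running test process (pid 19483), which still holds a reload handle on the client.crt of the default-k8s-diff-port-527297 profile torn down earlier, so each periodic reload attempt logs "no such file or directory". The addons-957510 and old-k8s-version-844823 variants of the same message elsewhere in this report have the same cause and do not affect the timed results.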

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-548380 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-548380 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qxsmr" [615593b4-1b67-4466-8579-f3cfd64f89df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 00:49:53.421455   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-qxsmr" [615593b4-1b67-4466-8579-f3cfd64f89df] Running
E0717 00:50:01.102336   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004041548s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-312676 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-312676 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.033873975s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-312676 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-312676 --alsologtostderr -v=3
E0717 00:49:55.982050   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-312676 --alsologtostderr -v=3: (12.225874315s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-548380 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
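The three probes above all exec inside the netcat deployment and differ only in target: DNS resolves kubernetes.default via the cluster DNS, Localhost opens a TCP connection to localhost:8080, and HairPin connects back to the host name "netcat" (presumably the Service fronting the same deployment), exercising hairpin routing, i.e. a pod reaching itself through its own service. The nc invocation is a zero-I/O port check: -z connects without sending data, -w 5 caps the connect timeout in seconds, and -i 5 sets the interval between probes; a zero exit status from the exec is the pass condition.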

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312676 -n embed-certs-312676
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312676 -n embed-certs-312676: exit status 7 (73.645919ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-312676 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (263.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-312676 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 00:50:11.343436   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-312676 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (4m22.965634943s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312676 -n embed-certs-312676
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (60.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0717 00:50:31.824139   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/default-k8s-diff-port-527297/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.240005344s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-grhq2" [ad0329ec-057e-4f76-a31f-1b9ce26ee62c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004076308s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-548380 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-548380 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nwgqm" [d22fa6b3-95c5-473c-917b-5cd1e98e01bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nwgqm" [d22fa6b3-95c5-473c-917b-5cd1e98e01bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003380674s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-548380 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (62.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0717 00:51:20.130383   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/old-k8s-version-844823/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.385466996s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.39s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ss6jg" [32d5134c-09cb-432d-9011-058d05313a6b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004919335s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-548380 "pgrep -a kubelet"
E0717 00:51:30.371335   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/old-k8s-version-844823/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-548380 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nwx7t" [24c3fc06-226f-4749-8ae3-64aebf39cafc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nwx7t" [24c3fc06-226f-4749-8ae3-64aebf39cafc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003572668s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-548380 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (37.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (37.649408074s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (37.65s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-548380 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-548380 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6dtkn" [e3afad73-780d-4dec-a5ab-a8cce4049c20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6dtkn" [e3afad73-780d-4dec-a5ab-a8cce4049c20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004130821s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-548380 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-548380 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-548380 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rfnz9" [cd958e00-c179-4f53-9b83-b905292ca75b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rfnz9" [cd958e00-c179-4f53-9b83-b905292ca75b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00367747s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-548380 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-rsxw7" [73cc1d5e-03e8-47e1-8368-6ce145d42408] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006106082s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (55.82s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.815830393s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.82s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-rsxw7" [73cc1d5e-03e8-47e1-8368-6ce145d42408] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004808813s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-467746 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-467746 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.97s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-467746 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-467746 -n no-preload-467746
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-467746 -n no-preload-467746: exit status 2 (321.520016ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-467746 -n no-preload-467746
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-467746 -n no-preload-467746: exit status 2 (332.798657ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-467746 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-467746 -n no-preload-467746
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-467746 -n no-preload-467746
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.97s)

TestNetworkPlugins/group/bridge/Start (76.27s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0717 00:53:05.675478   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/no-preload-467746/client.crt: no such file or directory
E0717 00:53:05.756004   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/no-preload-467746/client.crt: no such file or directory
E0717 00:53:05.916472   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/no-preload-467746/client.crt: no such file or directory
E0717 00:53:06.236820   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/no-preload-467746/client.crt: no such file or directory
E0717 00:53:06.877682   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/no-preload-467746/client.crt: no such file or directory
E0717 00:53:08.158590   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/no-preload-467746/client.crt: no such file or directory
E0717 00:53:08.818048   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/addons-957510/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-548380 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.265755959s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.27s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-96cfv" [11ab805e-2167-4ac2-b4fd-022de7b03733] Running
E0717 00:53:46.561524   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/no-preload-467746/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005097392s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-548380 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (10.17s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-548380 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xg8dz" [93e22f00-c101-4187-8dad-b9e1d1ac7c1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 00:53:53.734010   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/old-k8s-version-844823/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-xg8dz" [93e22f00-c101-4187-8dad-b9e1d1ac7c1e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003377051s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-548380 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-548380 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (10.19s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-548380 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bd7nl" [3269a23a-8947-4ae0-bdb6-31df79d88cbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bd7nl" [3269a23a-8947-4ae0-bdb6-31df79d88cbb] Running
E0717 00:54:27.522686   19483 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/no-preload-467746/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003309298s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-br6km" [06e32940-d278-441c-bd07-37d834a4522e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003603632s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-548380 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-548380 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-br6km" [06e32940-d278-441c-bd07-37d834a4522e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003720137s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-312676 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-312676 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.71s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-312676 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312676 -n embed-certs-312676
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312676 -n embed-certs-312676: exit status 2 (305.464061ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312676 -n embed-certs-312676
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312676 -n embed-certs-312676: exit status 2 (309.167881ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-312676 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312676 -n embed-certs-312676
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312676 -n embed-certs-312676
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.71s)

Test skip (28/336)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/Volcano (0s)
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-508536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-508536
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.15s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-548380 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-548380

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-548380

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /etc/hosts:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /etc/resolv.conf:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-548380

>>> host: crictl pods:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: crictl containers:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> k8s: describe netcat deployment:
error: context "kubenet-548380" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-548380" does not exist

>>> k8s: netcat logs:
error: context "kubenet-548380" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-548380" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-548380" does not exist

>>> k8s: coredns logs:
error: context "kubenet-548380" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-548380" does not exist

>>> k8s: api server logs:
error: context "kubenet-548380" does not exist

>>> host: /etc/cni:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: ip a s:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: ip r s:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: iptables-save:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: iptables table nat:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-548380" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-548380" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-548380" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: kubelet daemon config:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> k8s: kubelet logs:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 00:43:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-748628
contexts:
- context:
    cluster: kubernetes-upgrade-748628
    user: kubernetes-upgrade-748628
  name: kubernetes-upgrade-748628
current-context: kubernetes-upgrade-748628
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-748628
  user:
    client-certificate: /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/kubernetes-upgrade-748628/client.crt
    client-key: /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/kubernetes-upgrade-748628/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-548380

>>> host: docker daemon status:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: docker daemon config:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: docker system info:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: cri-docker daemon status:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: cri-docker daemon config:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: cri-dockerd version:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: containerd daemon status:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: containerd daemon config:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: containerd config dump:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: crio daemon status:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: crio daemon config:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: /etc/crio:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

>>> host: crio config:
* Profile "kubenet-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-548380"

----------------------- debugLogs end: kubenet-548380 [took: 3.002507755s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-548380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-548380
--- SKIP: TestNetworkPlugins/group/kubenet (3.15s)

TestNetworkPlugins/group/cilium (3.53s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-548380 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-548380

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-548380

>>> host: /etc/nsswitch.conf:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /etc/hosts:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /etc/resolv.conf:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-548380

>>> host: crictl pods:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: crictl containers:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> k8s: describe netcat deployment:
error: context "cilium-548380" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-548380" does not exist

>>> k8s: netcat logs:
error: context "cilium-548380" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-548380" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-548380" does not exist

>>> k8s: coredns logs:
error: context "cilium-548380" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-548380" does not exist

>>> k8s: api server logs:
error: context "cilium-548380" does not exist

>>> host: /etc/cni:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: ip a s:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: ip r s:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: iptables-save:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: iptables table nat:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-548380

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-548380

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-548380" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-548380" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-548380

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-548380

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-548380" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-548380" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-548380" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-548380" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-548380" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: kubelet daemon config:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> k8s: kubelet logs:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19265-12715/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Jul 2024 00:43:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-748628
contexts:
- context:
    cluster: kubernetes-upgrade-748628
    user: kubernetes-upgrade-748628
  name: kubernetes-upgrade-748628
current-context: kubernetes-upgrade-748628
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-748628
  user:
    client-certificate: /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/kubernetes-upgrade-748628/client.crt
    client-key: /home/jenkins/minikube-integration/19265-12715/.minikube/profiles/kubernetes-upgrade-748628/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-548380

>>> host: docker daemon status:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: docker daemon config:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: docker system info:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: cri-docker daemon status:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: cri-docker daemon config:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: cri-dockerd version:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: containerd daemon status:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: containerd daemon config:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: containerd config dump:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: crio daemon status:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: crio daemon config:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: /etc/crio:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

>>> host: crio config:
* Profile "cilium-548380" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-548380"

----------------------- debugLogs end: cilium-548380 [took: 3.359302414s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-548380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-548380
--- SKIP: TestNetworkPlugins/group/cilium (3.53s)
