Test Report: Docker_Linux_crio 19644

c0eea096ace35e11d6c690a668e6718dc1bec60e:2024-09-15:36219

Test fail (15/327)

TestAddons/parallel/Registry (73.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.604522ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-q5ztn" [d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002598506s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v7tht" [97f7a0a8-94e9-42f2-8e49-9731910d0d64] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003582711s
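
The same readiness the harness waits for can be checked directly with the label selectors from the log; a quick sketch (selectors and namespace as logged above):

	kubectl --context addons-022322 get pods -n kube-system -l actual-registry=true
	kubectl --context addons-022322 get pods -n kube-system -l registry-proxy=true
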
addons_test.go:342: (dbg) Run:  kubectl --context addons-022322 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-022322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-022322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.074829524s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-022322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 ip
2024/09/15 06:41:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable registry --alsologtostderr -v=1
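
The failing wget probe above can be replayed by hand when triaging. A minimal sketch, assuming the addons-022322 profile is still running; the -T 30 timeout and the /v2/ registry API path are illustrative additions, not part of the test:

	# In-cluster check, mirroring the test's probe against the registry Service:
	kubectl --context addons-022322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S -T 30 http://registry.kube-system.svc.cluster.local"
	# Host-side check against the registry port on the node IP (cf. the DEBUG GET above):
	curl -sI http://192.168.49.2:5000/v2/
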
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-022322
helpers_test.go:235: (dbg) docker inspect addons-022322:

-- stdout --
	[
	    {
	        "Id": "f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982",
	        "Created": "2024-09-15T06:29:57.902403759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14686,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:29:58.035217085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/hostname",
	        "HostsPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/hosts",
	        "LogPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982-json.log",
	        "Name": "/addons-022322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-022322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-022322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2-init/diff:/var/lib/docker/overlay2/41629ade7f7315f2df14bde3ca812850a45d34be79d1a0e1cd0df4510f198eaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-022322",
	                "Source": "/var/lib/docker/volumes/addons-022322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-022322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-022322",
	                "name.minikube.sigs.k8s.io": "addons-022322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4341f423acc3b63be59cc1466a91768de2aedaeeb73f44de65907efa3e283439",
	            "SandboxKey": "/var/run/docker/netns/4341f423acc3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-022322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a799b0ec0fecd5a4bd23fbed4e9986ab3cc570dd08d36ddf5fd2808b6a2d36c8",
	                    "EndpointID": "55c8c593338908cf9c9befd1f38c515f233792dcedb45ab4037d822354db546e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-022322",
	                        "f987f02b7bf0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
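
Individual fields can be pulled from the inspect document above with Go templates rather than re-reading the full dump; a small sketch, with field paths taken from the JSON above:

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' addons-022322
	docker inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-022322   # 32770, the published registry port
	docker inspect -f '{{(index .NetworkSettings.Networks "addons-022322").IPAddress}}' addons-022322    # 192.168.49.2
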
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-022322 -n addons-022322
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-022322 logs -n 25: (1.244347107s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-319436   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-319436              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-319436              | download-only-319436   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | -o=json --download-only              | download-only-993247   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-993247              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-993247              | download-only-993247   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-319436              | download-only-319436   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-993247              | download-only-993247   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | --download-only -p                   | download-docker-583228 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | download-docker-583228               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-583228            | download-docker-583228 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | --download-only -p                   | binary-mirror-350163   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | binary-mirror-350163                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33455               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-350163              | binary-mirror-350163   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| addons  | enable dashboard -p                  | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| start   | -p addons-022322 --wait=true         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:32 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:40 UTC | 15 Sep 24 06:40 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:40 UTC | 15 Sep 24 06:40 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ssh     | addons-022322 ssh curl -s            | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| addons  | addons-022322 addons                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-022322 addons                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| ip      | addons-022322 ip                     | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
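	
	# For local reproduction, the cluster-start invocation recorded in the Audit table above,
	# reassembled here as a single command (flags copied verbatim from the table):
	out/minikube-linux-amd64 start -p addons-022322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller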
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:34.409975   13892 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:34.410248   13892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:34.410258   13892 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:34.410265   13892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:34.410441   13892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:29:34.411031   13892 out.go:352] Setting JSON to false
	I0915 06:29:34.411877   13892 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":725,"bootTime":1726381049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:29:34.411966   13892 start.go:139] virtualization: kvm guest
	I0915 06:29:34.414135   13892 out.go:177] * [addons-022322] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:29:34.415403   13892 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:29:34.415427   13892 notify.go:220] Checking for updates...
	I0915 06:29:34.417886   13892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:34.419006   13892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:29:34.420065   13892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:29:34.421040   13892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:29:34.422082   13892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:29:34.423276   13892 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:34.444416   13892 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:29:34.444507   13892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:34.493618   13892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 06:29:34.484777495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:34.493719   13892 docker.go:318] overlay module found
	I0915 06:29:34.495531   13892 out.go:177] * Using the docker driver based on user configuration
	I0915 06:29:34.496714   13892 start.go:297] selected driver: docker
	I0915 06:29:34.496727   13892 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:34.496737   13892 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:29:34.497458   13892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:34.540933   13892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 06:29:34.532425836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:34.541099   13892 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:34.541411   13892 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:29:34.543067   13892 out.go:177] * Using Docker driver with root privileges
	I0915 06:29:34.544470   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:29:34.544531   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:29:34.544548   13892 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:29:34.544621   13892 start.go:340] cluster config:
	{Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:29:34.546120   13892 out.go:177] * Starting "addons-022322" primary control-plane node in "addons-022322" cluster
	I0915 06:29:34.547257   13892 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 06:29:34.548470   13892 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:29:34.549705   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:29:34.549737   13892 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 06:29:34.549743   13892 cache.go:56] Caching tarball of preloaded images
	I0915 06:29:34.549740   13892 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:29:34.549818   13892 preload.go:172] Found /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 06:29:34.549828   13892 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:29:34.550188   13892 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json ...
	I0915 06:29:34.550215   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json: {Name:mk75eadabcf88a1e80943e1d313c0ac3326c2ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:29:34.564904   13892 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:34.565023   13892 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:29:34.565042   13892 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:29:34.565047   13892 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:29:34.565054   13892 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:29:34.565061   13892 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:29:46.068469   13892 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:29:46.068505   13892 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:29:46.068552   13892 start.go:360] acquireMachinesLock for addons-022322: {Name:mk8cc43910e6fc14b57d745cb90cbe44d561ca46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:29:46.068638   13892 start.go:364] duration metric: took 67.597µs to acquireMachinesLock for "addons-022322"
	I0915 06:29:46.068659   13892 start.go:93] Provisioning new machine with config: &{Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:29:46.068733   13892 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:29:46.070467   13892 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:29:46.070716   13892 start.go:159] libmachine.API.Create for "addons-022322" (driver="docker")
	I0915 06:29:46.070750   13892 client.go:168] LocalClient.Create starting
	I0915 06:29:46.070843   13892 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem
	I0915 06:29:46.153955   13892 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem
	I0915 06:29:46.229474   13892 cli_runner.go:164] Run: docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:29:46.245025   13892 cli_runner.go:211] docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:29:46.245103   13892 network_create.go:284] running [docker network inspect addons-022322] to gather additional debugging logs...
	I0915 06:29:46.245124   13892 cli_runner.go:164] Run: docker network inspect addons-022322
	W0915 06:29:46.260140   13892 cli_runner.go:211] docker network inspect addons-022322 returned with exit code 1
	I0915 06:29:46.260172   13892 network_create.go:287] error running [docker network inspect addons-022322]: docker network inspect addons-022322: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-022322 not found
	I0915 06:29:46.260189   13892 network_create.go:289] output of [docker network inspect addons-022322]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-022322 not found
	
	** /stderr **
	I0915 06:29:46.260306   13892 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:29:46.275634   13892 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000722ff0}
	I0915 06:29:46.275681   13892 network_create.go:124] attempt to create docker network addons-022322 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:29:46.275724   13892 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-022322 addons-022322
	I0915 06:29:46.333701   13892 network_create.go:108] docker network addons-022322 192.168.49.0/24 created
	I0915 06:29:46.333733   13892 kic.go:121] calculated static IP "192.168.49.2" for the "addons-022322" container
	I0915 06:29:46.333805   13892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:29:46.348257   13892 cli_runner.go:164] Run: docker volume create addons-022322 --label name.minikube.sigs.k8s.io=addons-022322 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:29:46.364683   13892 oci.go:103] Successfully created a docker volume addons-022322
	I0915 06:29:46.364749   13892 cli_runner.go:164] Run: docker run --rm --name addons-022322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --entrypoint /usr/bin/test -v addons-022322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:29:53.558650   13892 cli_runner.go:217] Completed: docker run --rm --name addons-022322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --entrypoint /usr/bin/test -v addons-022322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (7.19385898s)
	I0915 06:29:53.558683   13892 oci.go:107] Successfully prepared a docker volume addons-022322
	I0915 06:29:53.558702   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:29:53.558719   13892 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:29:53.558765   13892 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-022322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:29:57.843175   13892 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-022322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.284379385s)
	I0915 06:29:57.843202   13892 kic.go:203] duration metric: took 4.284480255s to extract preloaded images to volume ...
	W0915 06:29:57.843320   13892 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 06:29:57.843484   13892 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:29:57.888235   13892 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-022322 --name addons-022322 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-022322 --network addons-022322 --ip 192.168.49.2 --volume addons-022322:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:29:58.195371   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Running}}
	I0915 06:29:58.213384   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.231552   13892 cli_runner.go:164] Run: docker exec addons-022322 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:29:58.274993   13892 oci.go:144] the created container "addons-022322" has a running status.
	I0915 06:29:58.275022   13892 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa...
	I0915 06:29:58.414826   13892 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:29:58.438897   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.455371   13892 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:29:58.455390   13892 kic_runner.go:114] Args: [docker exec --privileged addons-022322 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:29:58.500533   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.517370   13892 machine.go:93] provisionDockerMachine start ...
	I0915 06:29:58.517454   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:29:58.541070   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:29:58.541337   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:29:58.541359   13892 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:29:58.542136   13892 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45940->127.0.0.1:32768: read: connection reset by peer
	I0915 06:30:01.671607   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-022322
	
	I0915 06:30:01.671636   13892 ubuntu.go:169] provisioning hostname "addons-022322"
	I0915 06:30:01.671686   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:01.688450   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:01.688643   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:01.688659   13892 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-022322 && echo "addons-022322" | sudo tee /etc/hostname
	I0915 06:30:01.830097   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-022322
	
	I0915 06:30:01.830160   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:01.847238   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:01.847398   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:01.847416   13892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-022322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-022322/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-022322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:30:01.976277   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:30:01.976304   13892 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-5979/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-5979/.minikube}
	I0915 06:30:01.976347   13892 ubuntu.go:177] setting up certificates
	I0915 06:30:01.976360   13892 provision.go:84] configureAuth start
	I0915 06:30:01.976418   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:01.992863   13892 provision.go:143] copyHostCerts
	I0915 06:30:01.992932   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/ca.pem (1082 bytes)
	I0915 06:30:01.993032   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/cert.pem (1123 bytes)
	I0915 06:30:01.993090   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/key.pem (1679 bytes)
	I0915 06:30:01.993138   13892 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem org=jenkins.addons-022322 san=[127.0.0.1 192.168.49.2 addons-022322 localhost minikube]
	I0915 06:30:02.152480   13892 provision.go:177] copyRemoteCerts
	I0915 06:30:02.152547   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:30:02.152581   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.169072   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.264370   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:30:02.285061   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:30:02.305376   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 06:30:02.325505   13892 provision.go:87] duration metric: took 349.132448ms to configureAuth
	I0915 06:30:02.325532   13892 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:30:02.325690   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:02.325794   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.342353   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:02.342515   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:02.342529   13892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:30:02.557166   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:30:02.557186   13892 machine.go:96] duration metric: took 4.039795692s to provisionDockerMachine
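	The SSH command issued at 06:30:02.342 is what produced the output above: it drops a sysconfig file carrying minikube's CRI-O options, here marking the in-cluster service CIDR 10.96.0.0/12 as an insecure registry, and restarts the runtime to apply it. The same commands, annotated (a sketch reproducing the log verbatim):

	    # Persist minikube's CRI-O flags and apply them with a runtime restart.
	    sudo mkdir -p /etc/sysconfig
	    printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio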
	I0915 06:30:02.557198   13892 client.go:171] duration metric: took 16.486440184s to LocalClient.Create
	I0915 06:30:02.557211   13892 start.go:167] duration metric: took 16.486496436s to libmachine.API.Create "addons-022322"
	I0915 06:30:02.557220   13892 start.go:293] postStartSetup for "addons-022322" (driver="docker")
	I0915 06:30:02.557232   13892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:30:02.557296   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:30:02.557345   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.573470   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.668798   13892 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:30:02.671706   13892 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:30:02.671735   13892 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:30:02.671743   13892 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:30:02.671751   13892 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:30:02.671763   13892 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-5979/.minikube/addons for local assets ...
	I0915 06:30:02.671828   13892 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-5979/.minikube/files for local assets ...
	I0915 06:30:02.671860   13892 start.go:296] duration metric: took 114.633114ms for postStartSetup
	I0915 06:30:02.672224   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:02.688735   13892 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json ...
	I0915 06:30:02.688986   13892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:30:02.689026   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.704764   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.792641   13892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:30:02.797055   13892 start.go:128] duration metric: took 16.728306999s to createHost
	I0915 06:30:02.797078   13892 start.go:83] releasing machines lock for "addons-022322", held for 16.728428922s
	I0915 06:30:02.797129   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:02.813813   13892 ssh_runner.go:195] Run: cat /version.json
	I0915 06:30:02.813860   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.813912   13892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:30:02.813966   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.831602   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.832784   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.923562   13892 ssh_runner.go:195] Run: systemctl --version
	I0915 06:30:02.995566   13892 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:30:03.130869   13892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:30:03.134959   13892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:03.151986   13892 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:30:03.152064   13892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:03.177621   13892 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0915 06:30:03.177641   13892 start.go:495] detecting cgroup driver to use...
	I0915 06:30:03.177677   13892 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:30:03.177720   13892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:30:03.191256   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:30:03.200792   13892 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:30:03.200832   13892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:30:03.212398   13892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:30:03.224680   13892 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:30:03.296606   13892 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:30:03.380521   13892 docker.go:233] disabling docker service ...
	I0915 06:30:03.380577   13892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:30:03.397309   13892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:30:03.407246   13892 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:30:03.479912   13892 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:30:03.557251   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:30:03.567181   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:30:03.580975   13892 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:30:03.581028   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.589417   13892 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:30:03.589475   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.597938   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.606431   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.614878   13892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:30:03.622833   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.630960   13892 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.644352   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.652628   13892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:30:03.659670   13892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:30:03.666698   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:03.739739   13892 ssh_runner.go:195] Run: sudo systemctl restart crio
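	The sed sequence between 06:30:03.580 and 06:30:03.739 is minikube's in-place rewrite of the CRI-O drop-in config. Collected into one annotated script (a sketch of the same edits, assuming the kicbase image's default /etc/crio/crio.conf.d/02-crio.conf):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # Pin the pause image and align the cgroup manager with the kubelet.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    # Recreate conmon_cgroup = "pod" directly under cgroup_manager.
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    # Ensure a default_sysctls block exists, then let pods bind any port.
	    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	    sudo grep -q '^ *default_sysctls' "$CONF" || \
	      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	    sudo systemctl daemon-reload && sudo systemctl restart crio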
	I0915 06:30:03.813327   13892 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:30:03.813394   13892 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:30:03.816594   13892 start.go:563] Will wait 60s for crictl version
	I0915 06:30:03.816637   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:30:03.819439   13892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:30:03.850136   13892 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0915 06:30:03.850230   13892 ssh_runner.go:195] Run: crio --version
	I0915 06:30:03.884035   13892 ssh_runner.go:195] Run: crio --version
	I0915 06:30:03.917786   13892 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 06:30:03.918938   13892 cli_runner.go:164] Run: docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:30:03.934390   13892 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:30:03.937713   13892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
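	The bash one-liner just above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the current mapping, and copy the temp file back under sudo. Reformatted for readability (same logic; $'\t' makes the literal tab explicit):

	    # Rebuild /etc/hosts minus any old host.minikube.internal entry,
	    # add the gateway mapping, and install the result via a temp file.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo $'192.168.49.1\thost.minikube.internal'
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts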
	I0915 06:30:03.947346   13892 kubeadm.go:883] updating cluster {Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:30:03.947459   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:03.947520   13892 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:04.005083   13892 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:30:04.005102   13892 crio.go:433] Images already preloaded, skipping extraction
	I0915 06:30:04.005148   13892 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:04.035478   13892 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:30:04.035500   13892 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:30:04.035509   13892 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0915 06:30:04.035628   13892 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-022322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:30:04.035702   13892 ssh_runner.go:195] Run: crio config
	I0915 06:30:04.075458   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:30:04.075479   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:30:04.075490   13892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:30:04.075516   13892 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-022322 NodeName:addons-022322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:30:04.075684   13892 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-022322"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
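	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered in memory and uploaded in the scp step below as /var/tmp/minikube/kubeadm.yaml.new. Once that file lands on the node it can be inspected out-of-band, e.g. (a sketch; profile name from this run):

	    # Read the rendered kubeadm config off the node over minikube's SSH.
	    minikube -p addons-022322 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"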
	
	I0915 06:30:04.075747   13892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:30:04.083565   13892 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:30:04.083629   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:30:04.091035   13892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0915 06:30:04.106246   13892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:30:04.121787   13892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0915 06:30:04.137021   13892 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:30:04.139971   13892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:30:04.149279   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:04.219995   13892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:30:04.231563   13892 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322 for IP: 192.168.49.2
	I0915 06:30:04.231583   13892 certs.go:194] generating shared ca certs ...
	I0915 06:30:04.231604   13892 certs.go:226] acquiring lock for ca certs: {Name:mkdad922548833f717724234d3dfea667af688cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.231715   13892 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key
	I0915 06:30:04.327854   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt ...
	I0915 06:30:04.327883   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt: {Name:mk88553ea6fe6b3bbcddbaf5fb4399b9d57d5f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.328061   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key ...
	I0915 06:30:04.328080   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key: {Name:mk24979239a9d34f46352c8e1b862a8e1f67ff74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.328180   13892 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key
	I0915 06:30:04.431987   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt ...
	I0915 06:30:04.432015   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt: {Name:mk51bec24258c7187bbcfbda02cab37b09aca3d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.432183   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key ...
	I0915 06:30:04.432194   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key: {Name:mk16f3436fddecb64c7b08ccd6fc72cd1ef1fcbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.432279   13892 certs.go:256] generating profile certs ...
	I0915 06:30:04.432331   13892 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key
	I0915 06:30:04.432352   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt with IP's: []
	I0915 06:30:04.586803   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt ...
	I0915 06:30:04.586831   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: {Name:mked263498a55efc2d51dcfb8a63fb9ec85dbcce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.586983   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key ...
	I0915 06:30:04.586993   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key: {Name:mk512a1e1959bb23fe8a38640e6f78daabedd436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.587058   13892 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91
	I0915 06:30:04.587076   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:30:04.750681   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 ...
	I0915 06:30:04.750707   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91: {Name:mkee5aa0fd2cbaa659cee7dc8b42df64402edc7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.750854   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91 ...
	I0915 06:30:04.750867   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91: {Name:mk1e30234ffaa908afe95a4568f6afb8dd531545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.750937   13892 certs.go:381] copying /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 -> /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt
	I0915 06:30:04.751005   13892 certs.go:385] copying /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91 -> /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key
	I0915 06:30:04.751050   13892 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key
	I0915 06:30:04.751065   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt with IP's: []
	I0915 06:30:04.940019   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt ...
	I0915 06:30:04.940043   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt: {Name:mk350f05c318062bf8390e5793e0bce85435f32a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.940196   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key ...
	I0915 06:30:04.940224   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key: {Name:mk6d8d46803827bdaeae91eab214ce101c0c0420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.940408   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem (1679 bytes)
	I0915 06:30:04.940441   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:30:04.940467   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:30:04.940491   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem (1679 bytes)
	I0915 06:30:04.941035   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:30:04.963000   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:30:04.983402   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:30:05.003697   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:30:05.024132   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:30:05.043937   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:30:05.063970   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:30:05.084090   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:30:05.104158   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:30:05.125016   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:30:05.140478   13892 ssh_runner.go:195] Run: openssl version
	I0915 06:30:05.145206   13892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:30:05.153254   13892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.156142   13892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:30 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.156185   13892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.162089   13892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
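	The two commands at 06:30:05.156 and 06:30:05.162 install minikubeCA into the system trust store using OpenSSL's subject-hash link naming. Annotated (a sketch; the hash value b5213941 is taken from this run's log):

	    # OpenSSL resolves CAs in /etc/ssl/certs via <subject-hash>.0 symlinks.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # For this CA the hash is b5213941, matching the symlink in the log.
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"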
	I0915 06:30:05.169807   13892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:30:05.172461   13892 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:30:05.172500   13892 kubeadm.go:392] StartCluster: {Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:05.172563   13892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:30:05.172600   13892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:30:05.202825   13892 cri.go:89] found id: ""
	I0915 06:30:05.202888   13892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:30:05.210535   13892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:30:05.217839   13892 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:30:05.217879   13892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:30:05.225045   13892 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:30:05.225061   13892 kubeadm.go:157] found existing configuration files:
	
	I0915 06:30:05.225099   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:30:05.232105   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:30:05.232161   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:30:05.238944   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:30:05.245833   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:30:05.245876   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:30:05.252619   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:30:05.259724   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:30:05.259769   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:30:05.266638   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:30:05.273591   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:30:05.273634   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:30:05.280379   13892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 06:30:05.310747   13892 kubeadm.go:310] W0915 06:30:05.310080    1295 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:30:05.311052   13892 kubeadm.go:310] W0915 06:30:05.310582    1295 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:30:05.327784   13892 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0915 06:30:05.372778   13892 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:30:15.409306   13892 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:30:15.409389   13892 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:30:15.409512   13892 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 06:30:15.409605   13892 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0915 06:30:15.409650   13892 kubeadm.go:310] OS: Linux
	I0915 06:30:15.409729   13892 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 06:30:15.409811   13892 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 06:30:15.409885   13892 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 06:30:15.409961   13892 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 06:30:15.410028   13892 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 06:30:15.410096   13892 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 06:30:15.410154   13892 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 06:30:15.410224   13892 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 06:30:15.410283   13892 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 06:30:15.410362   13892 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:30:15.410462   13892 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:30:15.410539   13892 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:30:15.410605   13892 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:30:15.412349   13892 out.go:235]   - Generating certificates and keys ...
	I0915 06:30:15.412446   13892 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:30:15.412504   13892 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:30:15.412593   13892 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:30:15.412685   13892 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:30:15.412743   13892 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:30:15.412790   13892 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:30:15.412843   13892 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:30:15.412979   13892 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-022322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:30:15.413045   13892 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:30:15.413211   13892 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-022322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:30:15.413278   13892 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:30:15.413348   13892 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:30:15.413417   13892 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:30:15.413497   13892 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:30:15.413543   13892 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:30:15.413596   13892 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:30:15.413651   13892 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:30:15.413711   13892 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:30:15.413763   13892 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:30:15.413833   13892 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:30:15.413920   13892 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:30:15.415294   13892 out.go:235]   - Booting up control plane ...
	I0915 06:30:15.415383   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:30:15.415472   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:30:15.415571   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:30:15.415674   13892 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:30:15.415751   13892 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:30:15.415785   13892 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:30:15.415945   13892 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:30:15.416086   13892 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:30:15.416138   13892 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00131336s
	I0915 06:30:15.416214   13892 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:30:15.416267   13892 kubeadm.go:310] [api-check] The API server is healthy after 4.0019115s
	I0915 06:30:15.416369   13892 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:30:15.416471   13892 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:30:15.416520   13892 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:30:15.416688   13892 kubeadm.go:310] [mark-control-plane] Marking the node addons-022322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:30:15.416769   13892 kubeadm.go:310] [bootstrap-token] Using token: qtz71d.xvu8oxfcrox05ula
	I0915 06:30:15.418849   13892 out.go:235]   - Configuring RBAC rules ...
	I0915 06:30:15.418964   13892 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:30:15.419059   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:30:15.419214   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:30:15.419359   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:30:15.419468   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:30:15.419543   13892 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:30:15.419648   13892 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:30:15.419706   13892 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:30:15.419754   13892 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:30:15.419760   13892 kubeadm.go:310] 
	I0915 06:30:15.419809   13892 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:30:15.419820   13892 kubeadm.go:310] 
	I0915 06:30:15.419907   13892 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:30:15.419917   13892 kubeadm.go:310] 
	I0915 06:30:15.419949   13892 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:30:15.420041   13892 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:30:15.420120   13892 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:30:15.420127   13892 kubeadm.go:310] 
	I0915 06:30:15.420230   13892 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:30:15.420239   13892 kubeadm.go:310] 
	I0915 06:30:15.420279   13892 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:30:15.420288   13892 kubeadm.go:310] 
	I0915 06:30:15.420336   13892 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:30:15.420404   13892 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:30:15.420486   13892 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:30:15.420494   13892 kubeadm.go:310] 
	I0915 06:30:15.420609   13892 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:30:15.420683   13892 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:30:15.420688   13892 kubeadm.go:310] 
	I0915 06:30:15.420761   13892 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qtz71d.xvu8oxfcrox05ula \
	I0915 06:30:15.420863   13892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b6fa81cefa24e7bb86a72fc94b64425479c808b0a0b074c57900fb8f22ced41 \
	I0915 06:30:15.420883   13892 kubeadm.go:310] 	--control-plane 
	I0915 06:30:15.420892   13892 kubeadm.go:310] 
	I0915 06:30:15.420975   13892 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:30:15.420984   13892 kubeadm.go:310] 
	I0915 06:30:15.421055   13892 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qtz71d.xvu8oxfcrox05ula \
	I0915 06:30:15.421162   13892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b6fa81cefa24e7bb86a72fc94b64425479c808b0a0b074c57900fb8f22ced41 
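	The two warnings at the top of this init output (06:30:05.310) flag the kubeadm.k8s.io/v1beta3 spec as deprecated on kubeadm v1.31 and name the remedy themselves. A sketch of that migration (the output path is illustrative):

	    # Rewrite the deprecated v1beta3 documents to the API version
	    # kubeadm v1.31 prefers; review the result before reusing it.
	    kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml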
	I0915 06:30:15.421174   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:30:15.421186   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:30:15.422864   13892 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 06:30:15.424157   13892 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 06:30:15.427756   13892 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 06:30:15.427770   13892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 06:30:15.443978   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0915 06:30:15.630994   13892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:30:15.631066   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:15.631098   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-022322 minikube.k8s.io/updated_at=2024_09_15T06_30_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-022322 minikube.k8s.io/primary=true
	I0915 06:30:15.637726   13892 ops.go:34] apiserver oom_adj: -16
	I0915 06:30:15.740354   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:16.241041   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:16.740787   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:17.240556   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:17.741154   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:18.240693   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:18.740996   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:19.241363   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:19.740837   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:20.241069   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:20.301913   13892 kubeadm.go:1113] duration metric: took 4.670906624s to wait for elevateKubeSystemPrivileges
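	The run of "kubectl get sa default" calls above, spaced roughly 500 ms apart, is minikube polling for the default ServiceAccount before granting kube-system elevated RBAC. The same wait written as a plain loop (a sketch using the node-local paths from this log):

	    # Block until the "default" ServiceAccount exists, retrying twice a second.
	    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done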
	I0915 06:30:20.301953   13892 kubeadm.go:394] duration metric: took 15.129453888s to StartCluster
	I0915 06:30:20.301974   13892 settings.go:142] acquiring lock: {Name:mk6128dee5a1f201e20204fc9647ceb1f8837444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:20.302067   13892 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:30:20.302410   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/kubeconfig: {Name:mkb9d32ea81cbb0fb472b94a2fbc3394fd0d5468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:20.302584   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:30:20.302603   13892 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:30:20.302674   13892 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:30:20.302780   13892 addons.go:69] Setting yakd=true in profile "addons-022322"
	I0915 06:30:20.302797   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:20.302809   13892 addons.go:234] Setting addon yakd=true in "addons-022322"
	I0915 06:30:20.302800   13892 addons.go:69] Setting ingress=true in profile "addons-022322"
	I0915 06:30:20.302811   13892 addons.go:69] Setting registry=true in profile "addons-022322"
	I0915 06:30:20.302830   13892 addons.go:234] Setting addon ingress=true in "addons-022322"
	I0915 06:30:20.302841   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302846   13892 addons.go:234] Setting addon registry=true in "addons-022322"
	I0915 06:30:20.302853   13892 addons.go:69] Setting default-storageclass=true in profile "addons-022322"
	I0915 06:30:20.302869   13892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-022322"
	I0915 06:30:20.302882   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302893   13892 addons.go:69] Setting metrics-server=true in profile "addons-022322"
	I0915 06:30:20.302896   13892 addons.go:69] Setting storage-provisioner=true in profile "addons-022322"
	I0915 06:30:20.302910   13892 addons.go:234] Setting addon storage-provisioner=true in "addons-022322"
	I0915 06:30:20.302915   13892 addons.go:234] Setting addon metrics-server=true in "addons-022322"
	I0915 06:30:20.302906   13892 addons.go:69] Setting inspektor-gadget=true in profile "addons-022322"
	I0915 06:30:20.302941   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302944   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302959   13892 addons.go:234] Setting addon inspektor-gadget=true in "addons-022322"
	I0915 06:30:20.302986   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.303062   13892 addons.go:69] Setting gcp-auth=true in profile "addons-022322"
	I0915 06:30:20.303085   13892 mustload.go:65] Loading cluster: addons-022322
	I0915 06:30:20.303201   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303250   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:20.303362   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303410   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303410   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303453   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303460   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303468   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303767   13892 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-022322"
	I0915 06:30:20.303787   13892 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-022322"
	I0915 06:30:20.303811   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302882   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.304488   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309326   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309817   13892 addons.go:69] Setting helm-tiller=true in profile "addons-022322"
	I0915 06:30:20.309849   13892 addons.go:234] Setting addon helm-tiller=true in "addons-022322"
	I0915 06:30:20.309887   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.310907   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.331963   13892 addons.go:69] Setting volcano=true in profile "addons-022322"
	I0915 06:30:20.332020   13892 addons.go:234] Setting addon volcano=true in "addons-022322"
	I0915 06:30:20.332067   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.332190   13892 addons.go:69] Setting cloud-spanner=true in profile "addons-022322"
	I0915 06:30:20.332222   13892 addons.go:234] Setting addon cloud-spanner=true in "addons-022322"
	I0915 06:30:20.332251   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.332716   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.332771   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.302869   13892 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-022322"
	I0915 06:30:20.333031   13892 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-022322"
	I0915 06:30:20.333380   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.333586   13892 addons.go:69] Setting ingress-dns=true in profile "addons-022322"
	I0915 06:30:20.333604   13892 addons.go:234] Setting addon ingress-dns=true in "addons-022322"
	I0915 06:30:20.333652   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.334281   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.334862   13892 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-022322"
	I0915 06:30:20.334933   13892 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-022322"
	I0915 06:30:20.334982   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.335579   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309325   13892 out.go:177] * Verifying Kubernetes components...
	I0915 06:30:20.337960   13892 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:30:20.338463   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:20.338120   13892 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:30:20.338351   13892 addons.go:69] Setting volumesnapshots=true in profile "addons-022322"
	I0915 06:30:20.338628   13892 addons.go:234] Setting addon volumesnapshots=true in "addons-022322"
	I0915 06:30:20.339467   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.339891   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:30:20.339905   13892 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:30:20.339941   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
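Two recurring shapes appear from here on: "scp memory --> <path> (<n> bytes)" appears to mean the manifest is rendered in memory and streamed to the node over SSH rather than copied from a file on disk, and the 22/tcp HostPort template resolves the published SSH port of the node container (32768, as the sshutil.go:53 lines below confirm). A rough sketch of that in-memory push using golang.org/x/crypto/ssh; this shows the shape of the operation under those assumptions, not minikube's actual sshutil implementation:

	package main

	import (
		"bytes"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// pushManifest streams in-memory bytes to a path on the node over SSH,
	// roughly what the ssh_runner.go:362 "scp memory -->" lines record.
	func pushManifest(addr, keyPath string, data []byte, dest string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node
		})
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(data)
		return session.Run("sudo tee " + dest + " >/dev/null")
	}

	func main() {
		manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
		if err := pushManifest("127.0.0.1:32768", "/path/to/id_rsa",
			manifest, "/etc/kubernetes/addons/demo.yaml"); err != nil {
			log.Fatal(err)
		}
	}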
	I0915 06:30:20.342092   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.342452   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:30:20.342525   13892 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:30:20.342607   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.342971   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.346120   13892 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:30:20.347336   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:30:20.348659   13892 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:30:20.348674   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:20.348704   13892 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:30:20.349046   13892 addons.go:234] Setting addon default-storageclass=true in "addons-022322"
	I0915 06:30:20.349207   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.349642   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.351436   13892 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:30:20.351456   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:30:20.351509   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.352633   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:20.354116   13892 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:30:20.354130   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:30:20.354167   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.357730   13892 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:30:20.357783   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:30:20.357860   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.358885   13892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:30:20.360491   13892 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:30:20.360511   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:30:20.360581   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.366477   13892 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 06:30:20.367705   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 06:30:20.367726   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 06:30:20.367773   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.373892   13892 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:30:20.373916   13892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:30:20.373975   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.401143   13892 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-022322"
	I0915 06:30:20.401194   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.401670   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.404458   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:30:20.404531   13892 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:30:20.406526   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.412264   13892 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:30:20.412294   13892 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:30:20.412366   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.413394   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:30:20.414515   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W0915 06:30:20.415159   13892 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0915 06:30:20.416250   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.421239   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:30:20.425614   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:30:20.426998   13892 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:30:20.427153   13892 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:30:20.428255   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:30:20.428416   13892 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:30:20.428428   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:30:20.428481   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.428833   13892 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:30:20.428848   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:30:20.428892   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.430788   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.431752   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:30:20.431811   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:30:20.433923   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:30:20.433942   13892 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:30:20.433993   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.435738   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:30:20.437159   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:30:20.437177   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:30:20.437225   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.445942   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.448432   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.456319   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.457008   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.466588   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.470634   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.470670   13892 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:30:20.471780   13892 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:30:20.472972   13892 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:30:20.472989   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:30:20.473040   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.475108   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 06:30:20.477999   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.481170   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.488919   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.489280   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.493975   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.729415   13892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:30:20.832138   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 06:30:20.832170   13892 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 06:30:20.842928   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:30:20.842956   13892 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:30:20.843447   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:30:20.845491   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:30:20.845517   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:30:20.935961   13892 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:30:20.935990   13892 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:30:21.020819   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:30:21.020845   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:30:21.022344   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:30:21.022633   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:30:21.028470   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:30:21.028540   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:30:21.036612   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:30:21.036638   13892 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 06:30:21.043861   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:30:21.044948   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:30:21.044984   13892 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:30:21.129298   13892 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:30:21.129392   13892 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:30:21.132074   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:30:21.136371   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:30:21.140305   13892 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:30:21.140374   13892 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:30:21.223515   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:30:21.223615   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:30:21.231836   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:30:21.231864   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:30:21.323974   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:30:21.323999   13892 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:30:21.324884   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:30:21.324911   13892 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:30:21.329210   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:30:21.335116   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:30:21.343630   13892 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:30:21.343660   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:30:21.423095   13892 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:30:21.423183   13892 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:30:21.439939   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:30:21.439989   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:30:21.521606   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:30:21.521696   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:30:21.537275   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:30:21.621192   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:30:21.621282   13892 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:30:21.724452   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:30:21.724539   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:30:21.737909   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:30:21.739858   13892 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:30:21.739880   13892 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:30:21.925913   13892 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.450763214s)
	I0915 06:30:21.925946   13892 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
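The bash pipeline completed above pulls the live coredns ConfigMap, edits the Corefile with sed, and replaces it, so that host.minikube.internal resolves to the host-side gateway 192.168.49.1 from inside the cluster. Reading the two sed expressions back, the stanza inserted ahead of the "forward . /etc/resolv.conf" directive is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

(the second expression also inserts a "log" directive before the "errors" line).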
	I0915 06:30:21.927074   13892 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.197633331s)
	I0915 06:30:21.927844   13892 node_ready.go:35] waiting up to 6m0s for node "addons-022322" to be "Ready" ...
	I0915 06:30:21.938668   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:30:21.938695   13892 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:30:22.131212   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:30:22.131302   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:30:22.227350   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:30:22.227434   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:30:22.337579   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.494097126s)
	I0915 06:30:22.424841   13892 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:30:22.424937   13892 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:30:22.426572   13892 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:22.426594   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:30:22.441869   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:30:22.441902   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:30:22.625349   13892 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:30:22.625431   13892 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:30:22.625749   13892 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-022322" context rescaled to 1 replicas
	I0915 06:30:22.722472   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:30:22.737559   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:22.830732   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:30:22.830830   13892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:30:22.941338   13892 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:30:22.941417   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:30:23.037465   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:30:23.037557   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:30:23.131738   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:30:23.527823   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:30:23.527862   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:30:23.635288   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:30:23.635379   13892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:30:23.939243   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:23.941219   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:30:24.842837   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.820395677s)
	I0915 06:30:24.843012   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.820265268s)
	I0915 06:30:26.241838   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.19793849s)
	I0915 06:30:26.241872   13892 addons.go:475] Verifying addon ingress=true in "addons-022322"
	I0915 06:30:26.241927   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.109761671s)
	I0915 06:30:26.241965   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.105507724s)
	I0915 06:30:26.242074   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.912777866s)
	I0915 06:30:26.242143   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.906961401s)
	I0915 06:30:26.242274   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.704908207s)
	I0915 06:30:26.242305   13892 addons.go:475] Verifying addon metrics-server=true in "addons-022322"
	I0915 06:30:26.242321   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.504383825s)
	I0915 06:30:26.242338   13892 addons.go:475] Verifying addon registry=true in "addons-022322"
	I0915 06:30:26.242376   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.519818065s)
	I0915 06:30:26.243677   13892 out.go:177] * Verifying registry addon...
	I0915 06:30:26.243699   13892 out.go:177] * Verifying ingress addon...
	I0915 06:30:26.243677   13892 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-022322 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:30:26.245794   13892 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:30:26.246058   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:30:26.250360   13892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:30:26.250378   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:26.250570   13892 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:30:26.250588   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
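The kapi.go:75/86/96 lines here, and the long runs of them below, are a poll loop: list pods matching a label selector, report the phase, sleep, and repeat until everything is Running or the timeout lapses. A compact client-go equivalent, assuming an already-configured clientset; the function name and intervals are illustrative:

	package kapi

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel approximates the "waiting for pod ..., current state:
	// Pending" loop: poll until every pod matching selector reports Running.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, err // API error aborts; no pods yet keeps polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending; keep polling
					}
				}
				return true, nil
			})
	}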
	I0915 06:30:26.430553   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:26.752630   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:26.753835   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:26.845459   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.107795434s)
	W0915 06:30:26.845502   13892 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:30:26.845528   13892 retry.go:31] will retry after 304.40675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
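This failure is the usual CRD establishment race: the VolumeSnapshotClass object sits in the same apply batch as the CRDs that define it, so the API server has no mapping for the kind on the first pass. retry.go:31 backs off for ~304ms, and the re-run at 06:30:27.150966 below (this time with --force) succeeds once the CRDs are registered, completing at 06:30:29.859251. The retry itself is plain exponential backoff around the apply; a stdlib-only sketch with illustrative numbers and naming:

	package main

	import (
		"os/exec"
		"time"
	)

	// applyWithBackoff re-runs "kubectl apply" until it succeeds, doubling
	// the delay each attempt -- the same shape as the retry.go:31 line above.
	func applyWithBackoff(files []string, attempts int) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		delay := 300 * time.Millisecond // close to the logged 304.40675ms
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command("kubectl", args...).Run(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = applyWithBackoff([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 5)
	}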
	I0915 06:30:26.845567   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.713721026s)
	I0915 06:30:27.124607   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.18332755s)
	I0915 06:30:27.124648   13892 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-022322"
	I0915 06:30:27.126674   13892 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:30:27.128843   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:30:27.131216   13892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:30:27.131239   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:27.150966   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:27.248632   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:27.249242   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:27.566407   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:30:27.566474   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:27.584537   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:27.632194   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:27.750415   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:27.751081   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:27.841475   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:30:27.934260   13892 addons.go:234] Setting addon gcp-auth=true in "addons-022322"
	I0915 06:30:27.934313   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:27.934813   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:27.955612   13892 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:30:27.955667   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:27.970776   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:28.135033   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:28.249556   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:28.250273   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:28.430964   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:28.631563   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:28.748977   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:28.749552   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.132354   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:29.249177   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:29.249568   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.633236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:29.750088   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:29.750636   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.859251   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.708230768s)
	I0915 06:30:29.859418   13892 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.903782974s)
	I0915 06:30:29.861552   13892 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:30:29.863225   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:29.864891   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:30:29.864910   13892 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:30:29.925719   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:30:29.925740   13892 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:30:29.943867   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:30:29.943890   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:30:29.960393   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:30:30.132966   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:30.249143   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:30.249613   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:30.526051   13892 addons.go:475] Verifying addon gcp-auth=true in "addons-022322"
	I0915 06:30:30.527857   13892 out.go:177] * Verifying gcp-auth addon...
	I0915 06:30:30.530049   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:30:30.532704   13892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:30:30.532727   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:30.633796   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:30.749512   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:30.749926   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:30.930726   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:31.032992   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:31.132430   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:31.248998   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:31.249582   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:31.532095   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:31.631866   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:31.749423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:31.749735   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.033310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:32.131692   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:32.248944   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:32.249409   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.532440   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:32.632069   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:32.749426   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:32.749899   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.930811   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:33.033142   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:33.131445   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:33.249273   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:33.249696   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:33.533493   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:33.632131   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:33.749349   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:33.749683   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.033541   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:34.131638   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:34.249215   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:34.249571   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.533324   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:34.631916   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:34.749515   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:34.749960   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.931178   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:35.033423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:35.131815   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:35.249166   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:35.249432   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:35.532510   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:35.631903   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:35.749413   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:35.749752   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:36.032982   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:36.132490   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:36.248776   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:36.249119   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:36.533499   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:36.631988   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:36.749385   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:36.749758   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:37.033350   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:37.131770   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:37.249247   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:37.249628   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:37.430856   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:37.532843   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:37.632359   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:37.748704   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:37.749002   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:38.032752   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:38.132301   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:38.248619   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:38.249266   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:38.533360   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:38.631718   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:38.749031   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:38.749371   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:39.033571   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:39.132181   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:39.248407   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:39.248863   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:39.431113   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:39.533483   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:39.631970   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:39.749127   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:39.749498   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:40.032583   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:40.131976   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:40.249304   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:40.249738   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:40.533163   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:40.631473   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:40.748891   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:40.749468   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.032705   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:41.132285   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:41.248530   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:41.249032   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.533199   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:41.631596   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:41.748844   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:41.749922   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.931608   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:42.033113   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:42.131418   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:42.248812   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:42.249143   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:42.533306   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:42.631764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:42.748932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:42.749371   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:43.032478   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:43.131853   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:43.249088   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:43.249728   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:43.532884   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:43.632642   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:43.748599   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:43.749065   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:44.033602   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:44.132171   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:44.249344   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:44.249835   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:44.433599   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:44.532662   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:44.632181   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:44.748443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:44.748785   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:45.033368   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:45.131859   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:45.249263   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:45.249709   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:45.533096   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:45.631376   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:45.748955   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:45.749258   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.033511   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:46.132347   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:46.248739   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:46.249160   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.532647   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:46.632424   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:46.748779   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:46.749373   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.931183   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:47.033472   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:47.131786   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:47.249291   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:47.249573   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:47.533062   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:47.631443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:47.749019   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:47.749416   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:48.032697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:48.132659   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:48.249020   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:48.249401   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:48.532863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:48.632443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:48.748984   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:48.749413   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:49.032778   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:49.132449   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:49.248740   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:49.249158   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:49.430379   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:49.532894   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:49.632308   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:49.748689   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:49.749158   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:50.033151   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:50.131571   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:50.249014   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:50.249328   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:50.532829   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:50.632333   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:50.748757   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:50.749169   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:51.033369   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:51.131932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:51.249267   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:51.249658   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:51.430918   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:51.533471   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:51.632010   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:51.749072   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:51.749695   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:52.033468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:52.131895   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:52.249214   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:52.249830   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:52.533324   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:52.631661   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:52.749011   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:52.749470   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:53.033460   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:53.131849   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:53.249377   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:53.249709   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:53.431009   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:53.533596   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:53.632155   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:53.748462   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:53.748914   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:54.033214   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:54.131618   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:54.249008   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:54.249448   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:54.533042   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:54.632633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:54.748999   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:54.749588   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:55.033799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:55.132232   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:55.248600   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:55.248972   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:55.431132   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:55.533498   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:55.632249   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:55.748409   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:55.748799   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:56.033232   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:56.131633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:56.249087   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:56.249443   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:56.532853   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:56.632090   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:56.748878   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:56.748892   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:57.032670   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:57.132402   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:57.248887   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:57.249314   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:57.431495   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:57.532764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:57.632398   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:57.748750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:57.749249   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:58.032988   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:58.132605   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:58.248826   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:58.249443   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:58.533246   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:58.632466   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:58.748323   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:58.748971   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.033150   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:59.131282   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:59.248607   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:59.249030   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.533380   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:59.631811   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:59.749264   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:59.749909   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.930808   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:31:00.033110   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:00.131575   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:00.248601   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:00.248948   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:00.533625   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:00.632215   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:00.748540   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:00.749110   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:01.033691   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:01.132060   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:01.249399   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:01.249913   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:01.533411   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:01.631698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:01.749129   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:01.749394   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.032821   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:02.132265   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:02.248609   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:02.249248   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.431210   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:31:02.533582   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:02.632031   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:02.749318   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:02.749753   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.938686   13892 node_ready.go:49] node "addons-022322" has status "Ready":"True"
	I0915 06:31:02.938772   13892 node_ready.go:38] duration metric: took 41.010898206s for node "addons-022322" to be "Ready" ...
	I0915 06:31:02.938800   13892 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
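The kapi.go:96 messages above and below come from a poll loop that lists the pods matching a label selector and logs their phase until all of them are Running. A minimal client-go sketch of that pattern (the helper name waitForPodsByLabel, the 500ms interval, and the main wiring are illustrative assumptions, not minikube's actual kapi implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsByLabel polls until every pod matching selector in ns is Running,
	// logging the current phase on each attempt (compare the "Pending" lines in this log).
	func waitForPodsByLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing listed yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}

	func main() {
		// Assumes a reachable cluster via the default kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(config)
		if err := waitForPodsByLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
	}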
	I0915 06:31:02.947092   13892 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.037453   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:03.134905   13892 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:31:03.134932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:03.249093   13892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:31:03.249112   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:03.249662   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:03.534546   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:03.636557   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:03.751133   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:03.751759   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:03.952699   13892 pod_ready.go:93] pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.952725   13892 pod_ready.go:82] duration metric: took 1.005603448s for pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.952743   13892 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.956791   13892 pod_ready.go:93] pod "etcd-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.956833   13892 pod_ready.go:82] duration metric: took 4.073042ms for pod "etcd-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.956850   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.960877   13892 pod_ready.go:93] pod "kube-apiserver-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.960900   13892 pod_ready.go:82] duration metric: took 4.034597ms for pod "kube-apiserver-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.960911   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.965260   13892 pod_ready.go:93] pod "kube-controller-manager-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.965283   13892 pod_ready.go:82] duration metric: took 4.363575ms for pod "kube-controller-manager-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.965299   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gw7ff" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.033697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:04.132473   13892 pod_ready.go:93] pod "kube-proxy-gw7ff" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:04.132554   13892 pod_ready.go:82] duration metric: took 167.246699ms for pod "kube-proxy-gw7ff" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.132578   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.136244   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:04.251490   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:04.252243   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:04.533023   13892 pod_ready.go:93] pod "kube-scheduler-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:04.533103   13892 pod_ready.go:82] duration metric: took 400.506171ms for pod "kube-scheduler-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.533131   13892 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace to be "Ready" ...
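Outside the harness, an equivalent readiness gate can be expressed with kubectl wait; for example, for the metrics-server pod being tracked here (the context name and 6m timeout are taken from this log; the k8s-app=metrics-server selector is an assumption based on the addon's conventional labels):

	kubectl --context addons-022322 -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=6m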
	I0915 06:31:04.533863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:04.634658   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:04.749985   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:04.750620   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:05.033858   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:05.133473   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:05.249607   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:05.250016   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:05.533512   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:05.633522   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:05.749567   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:05.750619   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:06.033337   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:06.132883   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:06.251011   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:06.251171   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:06.533695   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:06.537858   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:06.633310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:06.749666   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:06.750659   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:07.033710   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:07.133859   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:07.250107   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:07.250514   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:07.533553   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:07.633929   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:07.749698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:07.750015   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:08.033127   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:08.132358   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:08.249375   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:08.250351   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:08.533052   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:08.538331   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:08.632846   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:08.750600   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:08.751091   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:09.033893   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:09.133772   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:09.249846   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.250485   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:09.533541   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:09.634329   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:09.749468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.749927   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:10.032951   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:10.133703   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:10.249374   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:10.250142   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:10.533264   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:10.634824   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:10.749724   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:10.749950   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:11.033288   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:11.038713   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:11.133046   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:11.249103   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:11.249357   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:11.533301   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:11.632698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:11.749784   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:11.750069   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:12.033157   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:12.132818   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:12.249697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:12.250174   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:12.533250   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:12.633141   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:12.749453   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:12.749779   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:13.033165   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:13.132738   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:13.249754   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:13.250133   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:13.533097   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:13.537943   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:13.635262   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:13.749235   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:13.749608   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:14.033344   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:14.134224   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:14.250178   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:14.250386   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:14.532745   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:14.632274   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:14.749463   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:14.749574   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:15.032578   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:15.132543   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:15.249733   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:15.250131   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:15.533283   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:15.635694   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:15.749500   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:15.749903   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:16.033326   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:16.037154   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:16.132492   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:16.249928   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:16.250220   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:16.533621   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:16.633765   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:16.749606   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:16.750083   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:17.033424   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:17.133632   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:17.249099   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:17.249293   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:17.533944   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:17.635728   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:17.749747   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:17.749845   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:18.033242   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:18.133749   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:18.248979   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:18.249435   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:18.533485   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:18.537953   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:18.634427   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:18.749507   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:18.750729   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:19.033132   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:19.133614   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:19.250070   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:19.250669   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:19.533209   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:19.634429   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:19.749576   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:19.750000   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:20.033510   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:20.133879   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:20.250067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:20.250469   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:20.533633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:20.633286   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:20.749441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:20.749850   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:21.032951   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:21.037580   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:21.133010   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:21.249096   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:21.249327   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:21.533841   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:21.636703   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:21.750045   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:21.750258   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:22.033777   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:22.133441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:22.250313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:22.250819   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:22.533952   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:22.632273   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:22.749762   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:22.750018   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:23.033083   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:23.037994   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:23.133419   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:23.249942   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:23.250259   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:23.533730   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:23.633468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:23.749343   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:23.749675   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:24.034567   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:24.133677   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:24.249854   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:24.250284   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:24.533692   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:24.635572   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:24.749613   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:24.749916   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:25.033066   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:25.038206   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:25.132536   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:25.249706   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:25.250366   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:25.533750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:25.633778   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:25.750162   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:25.750492   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.032739   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:26.133178   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:26.249808   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.250389   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.533398   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:26.632980   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:26.749044   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.749242   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.033678   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:27.132456   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:27.249550   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.249778   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.532989   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:27.537774   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:27.632926   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:27.749383   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.749640   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.033168   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:28.132791   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.249100   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.249491   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.533927   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:28.633791   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.750246   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.750586   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.034176   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:29.134799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.326913   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.328515   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.533911   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:29.538178   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:29.634297   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.750998   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.751378   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.033198   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.133588   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.249814   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.250074   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.533173   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.634738   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.749679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.750305   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.033423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.133414   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.250044   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.251160   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.533304   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.633864   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.750141   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.750451   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.033133   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.037779   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:32.136313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.249954   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.250075   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.533300   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.633419   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.749736   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.749765   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.034007   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.133723   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.251986   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.252651   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.533521   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.632441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.749489   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.750028   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.033420   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.133332   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.249806   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.250249   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.534059   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.537695   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:34.633237   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.749972   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.750523   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.033433   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.134668   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.249067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.249280   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.533868   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.633700   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.751799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.752239   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.033863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.135788   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.261209   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.261484   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.534169   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.538356   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:36.635005   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.749444   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.749741   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.033143   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.134759   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.249201   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.249293   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.533999   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.633966   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.749679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.750282   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.034292   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.135654   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.248750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.249021   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.533563   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.538901   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:38.634050   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.750025   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.750354   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.033208   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.134881   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.250167   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.250578   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.533950   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.633617   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.749971   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.750223   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.033298   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.134948   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.249689   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.249968   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.533359   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.633818   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.749314   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.750010   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.033236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.037513   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:41.132679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.249029   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.249263   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.533936   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.633190   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.749449   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.749911   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.033106   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.133817   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.249836   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.250431   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.535637   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.633862   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.749067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.749419   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.033542   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.038254   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:43.132986   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.249533   13892 kapi.go:107] duration metric: took 1m17.003470316s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:31:43.249679   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.533132   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.635084   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.824289   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.034118   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.135800   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.250034   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.533788   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.634382   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.825384   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.035081   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.041001   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:45.134128   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.324267   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.532799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.634388   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.750074   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.033800   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.133411   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.249977   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.533385   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.633892   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.749200   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.033644   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.133340   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.254798   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.534822   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.538268   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:47.633121   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.750145   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.034050   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.133341   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.249584   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.534071   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.633605   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.749704   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.033188   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.134519   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.250183   13892 kapi.go:107] duration metric: took 1m23.00438592s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:31:49.533890   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.538762   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:49.635540   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.033558   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.134427   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.533564   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.633920   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.033803   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.133735   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.533829   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.632841   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.033313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.038094   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:52.133649   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.533764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.633086   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.033466   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.134242   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.533335   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.632408   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.033715   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.133140   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.533484   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.538357   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:54.633319   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.033334   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.135308   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.534278   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.632743   13892 kapi.go:107] duration metric: took 1m28.503900328s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:31:56.033022   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.533339   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.033408   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.037428   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:57.533745   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.033869   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.561194   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.033310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.037527   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:59.533635   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.033679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.533809   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.033525   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.532938   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.538141   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:02.033393   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.533588   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.033570   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.534054   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.538193   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:04.033637   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.533236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.033082   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.533172   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.033825   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.037689   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:06.533490   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.033488   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.533224   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.033746   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.038349   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:08.532934   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.035261   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.533246   13892 kapi.go:107] duration metric: took 1m39.003196071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
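
	The long run of "waiting for pod ... current state: Pending" lines above is minikube's kapi.go wait helper polling each addon's label selector roughly twice a second until its pod leaves Pending or a per-addon timeout expires. A minimal stand-alone sketch of that loop, where podState is a hypothetical stub for the client-go lookup the real helper performs:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// podState stands in for the live API lookup; minikube instead lists
	// pods by label selector through client-go. Returning "Pending"
	// forever makes this sketch time out, mirroring the log above.
	func podState(selector string) string {
		return "Pending"
	}

	// waitForPods polls until the selector's pods leave Pending or the
	// deadline passes, printing one line per tick like kapi.go:96 does.
	func waitForPods(selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state := podState(selector)
			if state == "Running" {
				return nil
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, state)
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
		}
		return errors.New("timed out waiting for " + selector)
	}

	func main() {
		_ = waitForPods("kubernetes.io/minikube-addons=gcp-auth", 2*time.Second)
	}
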
	I0915 06:32:09.535024   13892 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-022322 cluster.
	I0915 06:32:09.536557   13892 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:32:09.537938   13892 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
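
	To make the `gcp-auth-skip-secret` hint above concrete: a pod carrying that label is left alone by the gcp-auth webhook, so its credentials are never mounted. A minimal sketch (pod and image names are hypothetical; the label value is an assumption, since presence of the label is what the webhook checks), emitted as a Go string so it can be piped to `kubectl apply -f -`:

	package main

	import "fmt"

	// Prints a pod manifest labeled so the gcp-auth webhook skips it.
	func main() {
		fmt.Print(`apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	`)
	}
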
	I0915 06:32:09.539455   13892 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, helm-tiller, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0915 06:32:09.540834   13892 addons.go:510] duration metric: took 1m49.238162954s for enable addons: enabled=[default-storageclass storage-provisioner ingress-dns nvidia-device-plugin helm-tiller cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0915 06:32:10.055748   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:12.538990   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:15.038859   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:17.539022   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:20.038101   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:22.038820   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:23.537933   13892 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"True"
	I0915 06:32:23.537954   13892 pod_ready.go:82] duration metric: took 1m19.004805064s for pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.537962   13892 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.541840   13892 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace has status "Ready":"True"
	I0915 06:32:23.541860   13892 pod_ready.go:82] duration metric: took 3.891408ms for pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.541876   13892 pod_ready.go:39] duration metric: took 1m20.602996157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:32:23.541894   13892 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:32:23.541935   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:23.541985   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:23.576334   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:23.576356   13892 cri.go:89] found id: ""
	I0915 06:32:23.576365   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:23.576422   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.579515   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:23.579565   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:23.612826   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:23.612848   13892 cri.go:89] found id: ""
	I0915 06:32:23.612859   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:23.612912   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.615937   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:23.616004   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:23.648343   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:23.648362   13892 cri.go:89] found id: ""
	I0915 06:32:23.648370   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:23.648421   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.651502   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:23.651550   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:23.683263   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:23.683283   13892 cri.go:89] found id: ""
	I0915 06:32:23.683291   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:23.683342   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.686441   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:23.686492   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:23.718280   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:23.718303   13892 cri.go:89] found id: ""
	I0915 06:32:23.718311   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:23.718362   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.721633   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:23.721680   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:23.752697   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:23.752714   13892 cri.go:89] found id: ""
	I0915 06:32:23.752721   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:23.752768   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.755879   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:23.755942   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:23.787801   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:23.787820   13892 cri.go:89] found id: ""
	I0915 06:32:23.787826   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:23.787876   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.791129   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:23.791151   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:23.867026   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:23.867061   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:23.901983   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:23.902011   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:23.935110   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:23.935141   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:23.988900   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:23.988938   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:24.031371   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:24.031405   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:24.081347   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:24.081384   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:24.122044   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:24.122095   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:24.155921   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:24.155948   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:24.196166   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:24.196216   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:24.263412   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:24.263447   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:24.275361   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:24.275390   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
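
	The "Gathering logs for ..." fan-out above wraps every collector in `/bin/bash -c` so embedded shell fallbacks such as `which crictl || echo crictl` and the trailing `|| sudo docker ps -a` alternative still work verbatim. A minimal local sketch of that pattern (gather is a hypothetical helper; the real code runs these over SSH via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one collector through `bash -c`, preserving any shell
	// fallback logic embedded in the command string.
	func gather(name, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
		}
		fmt.Printf("==> %s <==\n%s\n", name, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}
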
	I0915 06:32:26.871834   13892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:32:26.884976   13892 api_server.go:72] duration metric: took 2m6.582339744s to wait for apiserver process to appear ...
	I0915 06:32:26.885002   13892 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:32:26.885037   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:26.885094   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:26.916059   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:26.916084   13892 cri.go:89] found id: ""
	I0915 06:32:26.916094   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:26.916150   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.919091   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:26.919141   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:26.950001   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:26.950025   13892 cri.go:89] found id: ""
	I0915 06:32:26.950041   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:26.950092   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.953219   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:26.953681   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:26.986623   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:26.986647   13892 cri.go:89] found id: ""
	I0915 06:32:26.986653   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:26.986697   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.989805   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:26.989862   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:27.020895   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:27.020916   13892 cri.go:89] found id: ""
	I0915 06:32:27.020923   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:27.020964   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.023987   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:27.024043   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:27.055667   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:27.055687   13892 cri.go:89] found id: ""
	I0915 06:32:27.055695   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:27.055736   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.058824   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:27.058872   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:27.090021   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:27.090042   13892 cri.go:89] found id: ""
	I0915 06:32:27.090049   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:27.090092   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.093202   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:27.093251   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:27.125406   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:27.125425   13892 cri.go:89] found id: ""
	I0915 06:32:27.125431   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:27.125470   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.128687   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:27.128708   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:32:27.221426   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:27.221463   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:27.264237   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:27.264271   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:27.310366   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:27.310397   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:27.343769   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:27.343796   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:27.374824   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:27.374856   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:27.430978   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:27.431014   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:27.466156   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:27.466183   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:27.534355   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:27.534389   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:27.572880   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:27.572907   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:27.650217   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:27.650248   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:27.689764   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:27.689790   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:30.201718   13892 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:32:30.205361   13892 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:32:30.206248   13892 api_server.go:141] control plane version: v1.31.1
	I0915 06:32:30.206274   13892 api_server.go:131] duration metric: took 3.321265546s to wait for apiserver health ...
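
	The healthz wait above is a plain GET against the apiserver's /healthz endpoint that must return HTTP 200 with the literal body "ok". A minimal sketch of that probe; the address is taken from the log, and InsecureSkipVerify is an assumption standing in for minikube's real client-certificate TLS configuration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	// Probe the apiserver health endpoint and report status and body.
	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
	}
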
	I0915 06:32:30.206281   13892 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:32:30.206300   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:30.206346   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:30.247576   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:30.247601   13892 cri.go:89] found id: ""
	I0915 06:32:30.247616   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:30.247665   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.251237   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:30.251299   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:30.337514   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:30.337535   13892 cri.go:89] found id: ""
	I0915 06:32:30.337542   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:30.337580   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.340694   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:30.340761   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:30.374248   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:30.374270   13892 cri.go:89] found id: ""
	I0915 06:32:30.374277   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:30.374315   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.377794   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:30.377865   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:30.447654   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:30.447678   13892 cri.go:89] found id: ""
	I0915 06:32:30.447687   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:30.447735   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.450965   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:30.451014   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:30.528575   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:30.528594   13892 cri.go:89] found id: ""
	I0915 06:32:30.528601   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:30.528652   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.532059   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:30.532122   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:30.566547   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:30.566565   13892 cri.go:89] found id: ""
	I0915 06:32:30.566572   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:30.566612   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.569834   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:30.569904   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:30.603072   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:30.603098   13892 cri.go:89] found id: ""
	I0915 06:32:30.603109   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:30.603155   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.606231   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:30.606251   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:30.617438   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:30.617461   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:32:30.726726   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:30.726754   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:30.759609   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:30.759631   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:30.814163   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:30.814196   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:30.848586   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:30.848611   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:30.889221   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:30.889248   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:30.955679   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:30.955711   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:31.010974   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:31.011012   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:31.062696   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:31.062727   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:31.097720   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:31.097751   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:31.139225   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:31.139253   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:33.738096   13892 system_pods.go:59] 19 kube-system pods found
	I0915 06:32:33.738132   13892 system_pods.go:61] "coredns-7c65d6cfc9-xrtf5" [3d071306-6186-47d8-a38c-c09d0565172e] Running
	I0915 06:32:33.738138   13892 system_pods.go:61] "csi-hostpath-attacher-0" [b9779b21-66d4-497b-95ca-d4e3bb1f440d] Running
	I0915 06:32:33.738143   13892 system_pods.go:61] "csi-hostpath-resizer-0" [d1de7650-462e-48b8-a7c4-d41806ea999d] Running
	I0915 06:32:33.738146   13892 system_pods.go:61] "csi-hostpathplugin-r87k9" [55f95c6b-c8ef-44a8-8502-9101b3c1a6bc] Running
	I0915 06:32:33.738149   13892 system_pods.go:61] "etcd-addons-022322" [47de0033-c753-46fa-8a91-f22a259be595] Running
	I0915 06:32:33.738153   13892 system_pods.go:61] "kindnet-wj66m" [54288115-3d96-4604-8d43-05eb4463ffa4] Running
	I0915 06:32:33.738156   13892 system_pods.go:61] "kube-apiserver-addons-022322" [6deaca10-4203-4248-8a4f-6d69cd208f8d] Running
	I0915 06:32:33.738159   13892 system_pods.go:61] "kube-controller-manager-addons-022322" [91941bbe-e2ca-4927-8822-171a063ffbe7] Running
	I0915 06:32:33.738162   13892 system_pods.go:61] "kube-ingress-dns-minikube" [5079ffa6-3a78-4f89-b9b1-96c20fca6fb6] Running
	I0915 06:32:33.738166   13892 system_pods.go:61] "kube-proxy-gw7ff" [e4cb2a76-ff95-4461-9c14-70ee381b42b0] Running
	I0915 06:32:33.738169   13892 system_pods.go:61] "kube-scheduler-addons-022322" [6afa8b86-1784-40cf-a887-1e69ffa32f03] Running
	I0915 06:32:33.738172   13892 system_pods.go:61] "metrics-server-84c5f94fbc-gv786" [f7898557-9596-4239-9fab-1fce4db35921] Running
	I0915 06:32:33.738175   13892 system_pods.go:61] "nvidia-device-plugin-daemonset-7x4t6" [549d014b-a13d-466e-8959-d22764717045] Running
	I0915 06:32:33.738179   13892 system_pods.go:61] "registry-66c9cd494c-q5ztn" [d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b] Running
	I0915 06:32:33.738182   13892 system_pods.go:61] "registry-proxy-v7tht" [97f7a0a8-94e9-42f2-8e49-9731910d0d64] Running
	I0915 06:32:33.738185   13892 system_pods.go:61] "snapshot-controller-56fcc65765-h6nwh" [4b24f9d0-a988-4767-96ad-bf7e26d377ef] Running
	I0915 06:32:33.738188   13892 system_pods.go:61] "snapshot-controller-56fcc65765-kndfm" [402c59b1-bcf6-4b08-9646-8a21aed37020] Running
	I0915 06:32:33.738191   13892 system_pods.go:61] "storage-provisioner" [10257ad9-5003-4e70-ab68-778fc1738cc4] Running
	I0915 06:32:33.738193   13892 system_pods.go:61] "tiller-deploy-b48cc5f79-tpczq" [e9d5480f-8c59-4ab5-b5fc-a6fcd1801c51] Running
	I0915 06:32:33.738198   13892 system_pods.go:74] duration metric: took 3.531911981s to wait for pod list to return data ...
	I0915 06:32:33.738204   13892 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:32:33.740398   13892 default_sa.go:45] found service account: "default"
	I0915 06:32:33.740416   13892 default_sa.go:55] duration metric: took 2.207623ms for default service account to be created ...
	I0915 06:32:33.740424   13892 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:32:33.748862   13892 system_pods.go:86] 19 kube-system pods found
	I0915 06:32:33.748886   13892 system_pods.go:89] "coredns-7c65d6cfc9-xrtf5" [3d071306-6186-47d8-a38c-c09d0565172e] Running
	I0915 06:32:33.748892   13892 system_pods.go:89] "csi-hostpath-attacher-0" [b9779b21-66d4-497b-95ca-d4e3bb1f440d] Running
	I0915 06:32:33.748896   13892 system_pods.go:89] "csi-hostpath-resizer-0" [d1de7650-462e-48b8-a7c4-d41806ea999d] Running
	I0915 06:32:33.748900   13892 system_pods.go:89] "csi-hostpathplugin-r87k9" [55f95c6b-c8ef-44a8-8502-9101b3c1a6bc] Running
	I0915 06:32:33.748903   13892 system_pods.go:89] "etcd-addons-022322" [47de0033-c753-46fa-8a91-f22a259be595] Running
	I0915 06:32:33.748907   13892 system_pods.go:89] "kindnet-wj66m" [54288115-3d96-4604-8d43-05eb4463ffa4] Running
	I0915 06:32:33.748912   13892 system_pods.go:89] "kube-apiserver-addons-022322" [6deaca10-4203-4248-8a4f-6d69cd208f8d] Running
	I0915 06:32:33.748915   13892 system_pods.go:89] "kube-controller-manager-addons-022322" [91941bbe-e2ca-4927-8822-171a063ffbe7] Running
	I0915 06:32:33.748919   13892 system_pods.go:89] "kube-ingress-dns-minikube" [5079ffa6-3a78-4f89-b9b1-96c20fca6fb6] Running
	I0915 06:32:33.748922   13892 system_pods.go:89] "kube-proxy-gw7ff" [e4cb2a76-ff95-4461-9c14-70ee381b42b0] Running
	I0915 06:32:33.748927   13892 system_pods.go:89] "kube-scheduler-addons-022322" [6afa8b86-1784-40cf-a887-1e69ffa32f03] Running
	I0915 06:32:33.748935   13892 system_pods.go:89] "metrics-server-84c5f94fbc-gv786" [f7898557-9596-4239-9fab-1fce4db35921] Running
	I0915 06:32:33.748939   13892 system_pods.go:89] "nvidia-device-plugin-daemonset-7x4t6" [549d014b-a13d-466e-8959-d22764717045] Running
	I0915 06:32:33.748946   13892 system_pods.go:89] "registry-66c9cd494c-q5ztn" [d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b] Running
	I0915 06:32:33.748949   13892 system_pods.go:89] "registry-proxy-v7tht" [97f7a0a8-94e9-42f2-8e49-9731910d0d64] Running
	I0915 06:32:33.748960   13892 system_pods.go:89] "snapshot-controller-56fcc65765-h6nwh" [4b24f9d0-a988-4767-96ad-bf7e26d377ef] Running
	I0915 06:32:33.748965   13892 system_pods.go:89] "snapshot-controller-56fcc65765-kndfm" [402c59b1-bcf6-4b08-9646-8a21aed37020] Running
	I0915 06:32:33.748970   13892 system_pods.go:89] "storage-provisioner" [10257ad9-5003-4e70-ab68-778fc1738cc4] Running
	I0915 06:32:33.748974   13892 system_pods.go:89] "tiller-deploy-b48cc5f79-tpczq" [e9d5480f-8c59-4ab5-b5fc-a6fcd1801c51] Running
	I0915 06:32:33.748983   13892 system_pods.go:126] duration metric: took 8.554163ms to wait for k8s-apps to be running ...
	I0915 06:32:33.748991   13892 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:32:33.749033   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:32:33.759914   13892 system_svc.go:56] duration metric: took 10.915717ms WaitForService to wait for kubelet
	I0915 06:32:33.759944   13892 kubeadm.go:582] duration metric: took 2m13.45731059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:32:33.759970   13892 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:32:33.762677   13892 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0915 06:32:33.762700   13892 node_conditions.go:123] node cpu capacity is 8
	I0915 06:32:33.762712   13892 node_conditions.go:105] duration metric: took 2.737031ms to run NodePressure ...
	I0915 06:32:33.762722   13892 start.go:241] waiting for startup goroutines ...
	I0915 06:32:33.762728   13892 start.go:246] waiting for cluster config update ...
	I0915 06:32:33.762743   13892 start.go:255] writing updated cluster config ...
	I0915 06:32:33.762994   13892 ssh_runner.go:195] Run: rm -f paused
	I0915 06:32:33.810544   13892 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:32:33.812783   13892 out.go:177] * Done! kubectl is now configured to use "addons-022322" cluster and "default" namespace by default
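
Everything from here down is the post-mortem bundle captured after the failure: the CRI-O journal, container status, node description, dmesg, and per-component container logs. On a live cluster the same bundle can be regenerated with the minikube CLI; a minimal sketch, assuming the binary path and profile name used elsewhere in this report:

	out/minikube-linux-amd64 -p addons-022322 logs --file=post-mortem.txt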
	
	
	==> CRI-O <==
	Sep 15 06:41:43 addons-022322 crio[1033]: time="2024-09-15 06:41:43.397584625Z" level=info msg="Ran pod sandbox 6a8634cadba9b086a2bbd8adf1faa560245c5a6d9d9bcbd3607992c51cfdf869 with infra container: local-path-storage/helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6/POD" id=46064b4e-f348-4fef-b37d-7452d9654821 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 15 06:41:43 addons-022322 crio[1033]: time="2024-09-15 06:41:43.398654952Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=d1e75142-17a2-4292-b146-d58d7c30f599 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:41:43 addons-022322 crio[1033]: time="2024-09-15 06:41:43.398947601Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=d1e75142-17a2-4292-b146-d58d7c30f599 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:41:43 addons-022322 crio[1033]: time="2024-09-15 06:41:43.399378048Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=c4666e2d-89e8-49d2-b3c5-97c4422c8611 name=/runtime.v1.ImageService/PullImage
	Sep 15 06:41:43 addons-022322 crio[1033]: time="2024-09-15 06:41:43.400785870Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 15 06:41:47 addons-022322 crio[1033]: time="2024-09-15 06:41:47.759644515Z" level=info msg="Stopping pod sandbox: d97dde6d749b6804d060e0ef1a639aa168e1ab445d7788074e996ec7ba656075" id=0bc89d86-f22c-4174-864f-9b1bea0b95cd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:41:47 addons-022322 crio[1033]: time="2024-09-15 06:41:47.759920904Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:d97dde6d749b6804d060e0ef1a639aa168e1ab445d7788074e996ec7ba656075 UID:97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4 NetNS:/var/run/netns/bc8bcb6d-8284-4927-9e57-87beb8b4a109 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:41:47 addons-022322 crio[1033]: time="2024-09-15 06:41:47.760042286Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:41:47 addons-022322 crio[1033]: time="2024-09-15 06:41:47.805750834Z" level=info msg="Stopped pod sandbox: d97dde6d749b6804d060e0ef1a639aa168e1ab445d7788074e996ec7ba656075" id=0bc89d86-f22c-4174-864f-9b1bea0b95cd name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.364252416Z" level=info msg="Stopping container: b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8 (timeout: 30s)" id=4723d9fd-255a-4809-a078-fdf0714df12e name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.372266337Z" level=info msg="Stopping container: 9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e (timeout: 30s)" id=510a8441-d284-4809-897c-fa1350a42cd1 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:41:48 addons-022322 conmon[4032]: conmon b3f6682bc1ef81c1526c <ninfo>: container 4044 exited with status 2
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.507988515Z" level=info msg="Stopped container b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8: kube-system/registry-66c9cd494c-q5ztn/registry" id=4723d9fd-255a-4809-a078-fdf0714df12e name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.508538698Z" level=info msg="Stopping pod sandbox: a2284fe8b2fa689b68f515331673a19e6902516ad915f4e2086f985a182f5412" id=334f97df-f583-41f3-8efd-2cf2ce79ec14 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.508815758Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-q5ztn Namespace:kube-system ID:a2284fe8b2fa689b68f515331673a19e6902516ad915f4e2086f985a182f5412 UID:d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b NetNS:/var/run/netns/4a39e5f6-bad4-4edc-af4a-9b1231c8e3d6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.508997352Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-q5ztn from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.514358334Z" level=info msg="Stopped container 9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e: kube-system/registry-proxy-v7tht/registry-proxy" id=510a8441-d284-4809-897c-fa1350a42cd1 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.514885593Z" level=info msg="Stopping pod sandbox: df48160c1a955d397c3fac3dcfb046e5bbbf17423b097d777f6a1f2575aaad91" id=0d1aff44-b87a-41d8-be88-3c3260aec936 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.520806924Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-DPDDQNF4DWD2J3DF - [0:0]\n:KUBE-HP-IC4TX6RUQ2PDL433 - [0:0]\n:KUBE-HP-WZN244WMAC2SFCZJ - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-rbq4t_ingress-nginx_7fb8df77-b72c-4e81-bfa1-e89a8f2286f9_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-WZN244WMAC2SFCZJ\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-rbq4t_ingress-nginx_7fb8df77-b72c-4e81-bfa1-e89a8f2286f9_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-IC4TX6RUQ2PDL433\n-A KUBE-HP-IC4TX6RUQ2PDL433 -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-rbq4t_ingress-nginx_7fb8df77-b72c-4e81-bfa1-e89a8f2286f9_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-IC4TX6RUQ2PDL433 -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-rbq4t_ingress-nginx_7fb8df77-b72c-4e81-bfa1-e89a8f2286f9_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.20:80\n-A KUBE-HP-WZN244WMAC2SFCZJ -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-rbq4t_ingress-nginx_7fb8df77-b72c-4e81-bfa1-e89a8f2286f9_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-WZN244WMAC2SFCZJ -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-rbq4t_ingress-nginx_7fb8df77-b72c-4e81-bfa1-e89a8f2286f9_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.20:443\n-X KUBE-HP-DPDDQNF4DWD2J3DF\nCOMMIT\n"
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.523510218Z" level=info msg="Closing host port tcp:5000"
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.525080070Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.525228465Z" level=info msg="Got pod network &{Name:registry-proxy-v7tht Namespace:kube-system ID:df48160c1a955d397c3fac3dcfb046e5bbbf17423b097d777f6a1f2575aaad91 UID:97f7a0a8-94e9-42f2-8e49-9731910d0d64 NetNS:/var/run/netns/791862e9-dd2c-4767-917b-895aa521978c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.525348260Z" level=info msg="Deleting pod kube-system_registry-proxy-v7tht from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.552955836Z" level=info msg="Stopped pod sandbox: a2284fe8b2fa689b68f515331673a19e6902516ad915f4e2086f985a182f5412" id=334f97df-f583-41f3-8efd-2cf2ce79ec14 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:41:48 addons-022322 crio[1033]: time="2024-09-15 06:41:48.569522640Z" level=info msg="Stopped pod sandbox: df48160c1a955d397c3fac3dcfb046e5bbbf17423b097d777f6a1f2575aaad91" id=0d1aff44-b87a-41d8-be88-3c3260aec936 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	be7dda375439d       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              46 seconds ago      Running             nginx                      0                   d00635454c734       nginx
	ebf8a7f6a2815       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                   0                   fd3e91b2fb80d       gcp-auth-89d5ffd79-f42ql
	7b86d41c02550       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             10 minutes ago      Running             controller                 0                   ff90f27733177       ingress-nginx-controller-bc57996ff-rbq4t
	9e10770b5ea70       gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4              10 minutes ago      Exited              registry-proxy             0                   df48160c1a955       registry-proxy-v7tht
	837ba5352bdf5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago      Exited              patch                      0                   2a98a5989fe40       ingress-nginx-admission-patch-9qczt
	e9a2cb31b721d       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   9e0f35d887e16       nvidia-device-plugin-daemonset-7x4t6
	8976c24f1a582       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago      Exited              create                     0                   77e191d527a22       ingress-nginx-admission-create-kktzj
	a31e0f0167cc9       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server             0                   a3eb6e2a55c01       metrics-server-84c5f94fbc-gv786
	b3f6682bc1ef8       docker.io/library/registry@sha256:5e8c7f954d64eb89a98a3f84b6dd1e1f4a9cf3d25e41575dd0a96d3e3363cba7                           10 minutes ago      Exited              registry                   0                   a2284fe8b2fa6       registry-66c9cd494c-q5ztn
	e02acb9daf95c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             10 minutes ago      Running             local-path-provisioner     0                   45ad5754c4627       local-path-provisioner-86d989889c-dmzqm
	5077f244ae470       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               10 minutes ago      Running             cloud-spanner-emulator     0                   d3139190afd54       cloud-spanner-emulator-769b77f747-xprfl
	b9b5e44789caa       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns       0                   9297294de83bf       kube-ingress-dns-minikube
	3e976270afdc6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                    0                   749159bde67b6       coredns-7c65d6cfc9-xrtf5
	f16ac41ad768c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner        0                   b981d61af6f0a       storage-provisioner
	8a93f6647ecee       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                             11 minutes ago      Running             kindnet-cni                0                   1db5bf8d5ef4a       kindnet-wj66m
	2357c6fca0125       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             11 minutes ago      Running             kube-proxy                 0                   ad944dd66325b       kube-proxy-gw7ff
	8cd403ba68b5e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             11 minutes ago      Running             etcd                       0                   3704996f909cf       etcd-addons-022322
	cd45634612a50       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             11 minutes ago      Running             kube-apiserver             0                   1b2ea9f7b9f0a       kube-apiserver-addons-022322
	793a3d9d3aa84       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             11 minutes ago      Running             kube-scheduler             0                   0d8125e8ef959       kube-scheduler-addons-022322
	b6d57c6bce9ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             11 minutes ago      Running             kube-controller-manager    0                   f6b2699e528bd       kube-controller-manager-addons-022322
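
The Exited state of the registry and registry-proxy containers matches the teardown at the end of the test, where the registry addon is disabled. The same view is available directly over the node's CRI socket; a sketch using the crictl invocations already visible in the log above (--name is a substring filter, and the ID can be any unique prefix from this table):

	sudo crictl ps -a --name registry            # list matching containers, including exited ones
	sudo crictl logs --tail 400 b3f6682bc1ef8    # last lines from the exited registry container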
	
	
	==> coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] <==
	[INFO] 10.244.0.18:53657 - 14329 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107885s
	[INFO] 10.244.0.18:57900 - 62309 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073666s
	[INFO] 10.244.0.18:57900 - 27259 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112682s
	[INFO] 10.244.0.18:51135 - 25280 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004514344s
	[INFO] 10.244.0.18:51135 - 65484 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005682544s
	[INFO] 10.244.0.18:37446 - 3615 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007024634s
	[INFO] 10.244.0.18:37446 - 35842 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008710763s
	[INFO] 10.244.0.18:58524 - 29672 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004764629s
	[INFO] 10.244.0.18:58524 - 27116 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007955396s
	[INFO] 10.244.0.18:36601 - 30204 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000108259s
	[INFO] 10.244.0.18:36601 - 46072 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175121s
	[INFO] 10.244.0.21:59154 - 7876 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000214034s
	[INFO] 10.244.0.21:52693 - 54985 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000104888s
	[INFO] 10.244.0.21:53529 - 47590 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129252s
	[INFO] 10.244.0.21:51668 - 52873 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189752s
	[INFO] 10.244.0.21:47297 - 8172 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109168s
	[INFO] 10.244.0.21:45975 - 40007 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014485s
	[INFO] 10.244.0.21:52233 - 54039 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007424492s
	[INFO] 10.244.0.21:38833 - 7325 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.010412323s
	[INFO] 10.244.0.21:52331 - 57813 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00775984s
	[INFO] 10.244.0.21:56895 - 26084 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.015034445s
	[INFO] 10.244.0.21:50418 - 4446 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006952543s
	[INFO] 10.244.0.21:60979 - 46705 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008386519s
	[INFO] 10.244.0.21:44818 - 40867 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000749057s
	[INFO] 10.244.0.21:53307 - 22244 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000849441s
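
The trailing NOERROR answers show that registry.kube-system.svc.cluster.local was resolving inside the cluster; the NXDOMAIN lines above them are the normal ndots search-path expansion (.svc.cluster.local, .google.internal, and so on), not failures. The lookup can be repeated from a throwaway pod using the same busybox image the test uses; a sketch:

	kubectl --context addons-022322 run dns-check --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  nslookup registry.kube-system.svc.cluster.local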
	
	
	==> describe nodes <==
	Name:               addons-022322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-022322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-022322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_30_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-022322
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:30:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-022322
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:41:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:41:17 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:41:17 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:41:17 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:41:17 +0000   Sun, 15 Sep 2024 06:31:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-022322
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f53fbb4eb4047c3b38331dd58a0e17d
	  System UUID:                b20760c2-a565-423c-88fb-0ebf81478f0b
	  Boot ID:                    d7eb9d55-e244-423e-b0bb-fd0ad06c12bb
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-xprfl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  gcp-auth                    gcp-auth-89d5ffd79-f42ql                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-rbq4t                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-xrtf5                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-addons-022322                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-wj66m                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-022322                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-022322                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gw7ff                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-022322                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-gv786                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-7x4t6                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  local-path-storage          local-path-provisioner-86d989889c-dmzqm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-022322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-022322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-022322 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-022322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-022322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-022322 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-022322 event: Registered Node addons-022322 in Controller
	  Normal   NodeReady                10m                kubelet          Node addons-022322 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.417057] i8042: Warning: Keylock active
	[  +0.007219] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003031] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000695] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000704] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000612] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000625] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000619] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.600975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.568733] kauditd_printk_skb: 46 callbacks suppressed
	[Sep15 06:41] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +1.004271] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +2.015809] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +4.127715] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +8.191377] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[ +16.126848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	
	
	==> etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] <==
	{"level":"info","ts":"2024-09-15T06:30:23.937675Z","caller":"traceutil/trace.go:171","msg":"trace[1964839114] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:452; }","duration":"112.53835ms","start":"2024-09-15T06:30:23.825128Z","end":"2024-09-15T06:30:23.937666Z","steps":["trace[1964839114] 'agreement among raft nodes before linearized reading'  (duration: 107.412384ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:23.937678Z","caller":"traceutil/trace.go:171","msg":"trace[1725845484] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:452; }","duration":"112.729433ms","start":"2024-09-15T06:30:23.824939Z","end":"2024-09-15T06:30:23.937668Z","steps":["trace[1725845484] 'agreement among raft nodes before linearized reading'  (duration: 107.731581ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.041975Z","caller":"traceutil/trace.go:171","msg":"trace[224570116] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"105.124293ms","start":"2024-09-15T06:30:23.936813Z","end":"2024-09-15T06:30:24.041937Z","steps":["trace[224570116] 'process raft request'  (duration: 96.117583ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.042456Z","caller":"traceutil/trace.go:171","msg":"trace[2126800427] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"104.80218ms","start":"2024-09-15T06:30:23.937643Z","end":"2024-09-15T06:30:24.042445Z","steps":["trace[2126800427] 'process raft request'  (duration: 104.169503ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.042842Z","caller":"traceutil/trace.go:171","msg":"trace[1060131522] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"103.35953ms","start":"2024-09-15T06:30:23.939467Z","end":"2024-09-15T06:30:24.042827Z","steps":["trace[1060131522] 'process raft request'  (duration: 103.268287ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.043080Z","caller":"traceutil/trace.go:171","msg":"trace[68935875] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:464; }","duration":"103.319532ms","start":"2024-09-15T06:30:23.939753Z","end":"2024-09-15T06:30:24.043073Z","steps":["trace[68935875] 'read index received'  (duration: 951.239µs)","trace[68935875] 'applied index is now lower than readState.Index'  (duration: 102.367501ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:30:24.043144Z","caller":"traceutil/trace.go:171","msg":"trace[1100533312] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"101.898324ms","start":"2024-09-15T06:30:23.941239Z","end":"2024-09-15T06:30:24.043137Z","steps":["trace[1100533312] 'process raft request'  (duration: 101.57996ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.043319Z","caller":"traceutil/trace.go:171","msg":"trace[1710169861] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"100.573964ms","start":"2024-09-15T06:30:23.942734Z","end":"2024-09-15T06:30:24.043308Z","steps":["trace[1710169861] 'process raft request'  (duration: 100.142047ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.044239Z","caller":"traceutil/trace.go:171","msg":"trace[430677801] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"101.345814ms","start":"2024-09-15T06:30:23.942848Z","end":"2024-09-15T06:30:24.044194Z","steps":["trace[430677801] 'process raft request'  (duration: 100.096168ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.044393Z","caller":"traceutil/trace.go:171","msg":"trace[1553361540] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"101.362761ms","start":"2024-09-15T06:30:23.943022Z","end":"2024-09-15T06:30:24.044385Z","steps":["trace[1553361540] 'process raft request'  (duration: 99.949501ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.043567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.801903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.044478Z","caller":"traceutil/trace.go:171","msg":"trace[303371796] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:460; }","duration":"104.72355ms","start":"2024-09-15T06:30:23.939748Z","end":"2024-09-15T06:30:24.044472Z","steps":["trace[303371796] 'agreement among raft nodes before linearized reading'  (duration: 103.480693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.631766Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.736254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.631932Z","caller":"traceutil/trace.go:171","msg":"trace[331691259] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:509; }","duration":"102.903775ms","start":"2024-09-15T06:30:24.528987Z","end":"2024-09-15T06:30:24.631891Z","steps":["trace[331691259] 'agreement among raft nodes before linearized reading'  (duration: 102.691356ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.724544Z","caller":"traceutil/trace.go:171","msg":"trace[1058426745] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"180.002052ms","start":"2024-09-15T06:30:24.544515Z","end":"2024-09-15T06:30:24.724517Z","steps":["trace[1058426745] 'process raft request'  (duration: 179.769517ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.724704Z","caller":"traceutil/trace.go:171","msg":"trace[911394532] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:522; }","duration":"179.289521ms","start":"2024-09-15T06:30:24.545401Z","end":"2024-09-15T06:30:24.724690Z","steps":["trace[911394532] 'read index received'  (duration: 92.846516ms)","trace[911394532] 'applied index is now lower than readState.Index'  (duration: 86.442357ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:30:24.724875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.201545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.724952Z","caller":"traceutil/trace.go:171","msg":"trace[1209499748] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:517; }","duration":"180.28811ms","start":"2024-09-15T06:30:24.544654Z","end":"2024-09-15T06:30:24.724942Z","steps":["trace[1209499748] 'agreement among raft nodes before linearized reading'  (duration: 180.146303ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.725112Z","caller":"traceutil/trace.go:171","msg":"trace[1655529344] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"180.345425ms","start":"2024-09-15T06:30:24.544758Z","end":"2024-09-15T06:30:24.725104Z","steps":["trace[1655529344] 'process raft request'  (duration: 179.83428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.725298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.355901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:30:24.725370Z","caller":"traceutil/trace.go:171","msg":"trace[994147516] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:517; }","duration":"180.432064ms","start":"2024-09-15T06:30:24.544929Z","end":"2024-09-15T06:30:24.725361Z","steps":["trace[994147516] 'agreement among raft nodes before linearized reading'  (duration: 180.340916ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:31:52.950195Z","caller":"traceutil/trace.go:171","msg":"trace[1140578157] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"103.841586ms","start":"2024-09-15T06:31:52.846338Z","end":"2024-09-15T06:31:52.950180Z","steps":["trace[1140578157] 'process raft request'  (duration: 103.740777ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:40:10.962662Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1592}
	{"level":"info","ts":"2024-09-15T06:40:10.985039Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1592,"took":"21.96857ms","hash":12553061,"current-db-size-bytes":6156288,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3473408,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-15T06:40:10.985077Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":12553061,"revision":1592,"compact-revision":-1}
	
	
	==> gcp-auth [ebf8a7f6a28156c5630a4cc474404dbbe134dc27b13486fc221e2c64f628f1f0] <==
	2024/09/15 06:32:08 GCP Auth Webhook started!
	2024/09/15 06:32:33 Ready to marshal response ...
	2024/09/15 06:32:33 Ready to write response ...
	2024/09/15 06:32:34 Ready to marshal response ...
	2024/09/15 06:32:34 Ready to write response ...
	2024/09/15 06:32:34 Ready to marshal response ...
	2024/09/15 06:32:34 Ready to write response ...
	2024/09/15 06:40:47 Ready to marshal response ...
	2024/09/15 06:40:47 Ready to write response ...
	2024/09/15 06:40:50 Ready to marshal response ...
	2024/09/15 06:40:50 Ready to write response ...
	2024/09/15 06:40:54 Ready to marshal response ...
	2024/09/15 06:40:54 Ready to write response ...
	2024/09/15 06:41:00 Ready to marshal response ...
	2024/09/15 06:41:00 Ready to write response ...
	2024/09/15 06:41:15 Ready to marshal response ...
	2024/09/15 06:41:15 Ready to write response ...
	2024/09/15 06:41:43 Ready to marshal response ...
	2024/09/15 06:41:43 Ready to write response ...
	2024/09/15 06:41:43 Ready to marshal response ...
	2024/09/15 06:41:43 Ready to write response ...
	
	
	==> kernel <==
	 06:41:49 up 24 min,  0 users,  load average: 0.17, 0.25, 0.29
	Linux addons-022322 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] <==
	I0915 06:39:42.741879       1 main.go:299] handling current node
	I0915 06:39:52.741163       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:39:52.741199       1 main.go:299] handling current node
	I0915 06:40:02.744269       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:40:02.744300       1 main.go:299] handling current node
	I0915 06:40:12.741561       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:40:12.741606       1 main.go:299] handling current node
	I0915 06:40:22.741439       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:40:22.741478       1 main.go:299] handling current node
	I0915 06:40:32.743376       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:40:32.743413       1 main.go:299] handling current node
	I0915 06:40:42.741358       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:40:42.741385       1 main.go:299] handling current node
	I0915 06:40:52.742166       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:40:52.742202       1 main.go:299] handling current node
	I0915 06:41:02.742805       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:41:02.742839       1 main.go:299] handling current node
	I0915 06:41:12.741669       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:41:12.741701       1 main.go:299] handling current node
	I0915 06:41:22.741861       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:41:22.741893       1 main.go:299] handling current node
	I0915 06:41:32.741392       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:41:32.741448       1 main.go:299] handling current node
	I0915 06:41:42.741427       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:41:42.741479       1 main.go:299] handling current node
	
	
	==> kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] <==
	W0915 06:32:23.167650       1 handler_proxy.go:99] no RequestInfo found in the context
	E0915 06:32:23.167718       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0915 06:32:23.177804       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0915 06:40:57.536268       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:55822: read: connection reset by peer
	I0915 06:41:00.195132       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 06:41:00.358287       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.223.215"}
	I0915 06:41:02.972541       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0915 06:41:31.847713       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.847786       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.860136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.860240       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.861510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.861559       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.873408       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.873456       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.927412       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.927451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:41:32.862071       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:41:32.928023       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0915 06:41:33.025299       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0915 06:41:37.637716       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:41:38.658596       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] <==
	W0915 06:41:34.455284       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:34.455319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:36.186049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:36.186082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:36.265174       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:36.265212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:36.906720       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:36.906757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0915 06:41:38.659784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:39.596384       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:39.596427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:39.877655       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:39.877693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:40.810200       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:40.810239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:41.698477       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:41.698526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:41:42.057727       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:42.057770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:41:47.741020       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0915 06:41:48.041558       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:41:48.041612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:41:48.351240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.922µs"
	I0915 06:41:49.514640       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0915 06:41:49.514680       1 shared_informer.go:320] Caches are synced for resource quota
	
	
	==> kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] <==
	I0915 06:30:21.834006       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:30:23.436123       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:30:23.436244       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:30:23.828735       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:30:23.920347       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:30:24.020895       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:30:24.021810       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:30:24.021862       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:30:24.023838       1 config.go:199] "Starting service config controller"
	I0915 06:30:24.035431       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:30:24.037976       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:30:24.024321       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:30:24.038178       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:30:24.038213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:30:24.024295       1 config.go:328] "Starting node config controller"
	I0915 06:30:24.038343       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:30:24.138804       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] <==
	E0915 06:30:12.440324       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:30:12.439968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 06:30:12.440368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 06:30:12.440396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:12.440004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:30:12.440436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:12.440042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:12.440462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.324810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:30:13.324857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.358300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:30:13.358343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.387669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:30:13.387710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.459534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:13.459576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.464687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:30:13.464727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.561227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:13.561268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.591583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:30:13.591620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.632014       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:30:13.632056       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:30:16.638356       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:41:43 addons-022322 kubelet[1653]: I0915 06:41:43.232555    1653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8mq\" (UniqueName: \"kubernetes.io/projected/2a163c6c-fb90-4f42-8156-5f00dc9a2fa2-kube-api-access-kl8mq\") pod \"helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6\" (UID: \"2a163c6c-fb90-4f42-8156-5f00dc9a2fa2\") " pod="local-path-storage/helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6"
	Sep 15 06:41:44 addons-022322 kubelet[1653]: E0915 06:41:44.856916    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382504856713516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:539632,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:41:44 addons-022322 kubelet[1653]: E0915 06:41:44.856946    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382504856713516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:539632,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:41:47 addons-022322 kubelet[1653]: I0915 06:41:47.959154    1653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hvqq\" (UniqueName: \"kubernetes.io/projected/97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4-kube-api-access-5hvqq\") pod \"97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4\" (UID: \"97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4\") "
	Sep 15 06:41:47 addons-022322 kubelet[1653]: I0915 06:41:47.959209    1653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4-gcp-creds\") pod \"97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4\" (UID: \"97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4\") "
	Sep 15 06:41:47 addons-022322 kubelet[1653]: I0915 06:41:47.959335    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4" (UID: "97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 06:41:47 addons-022322 kubelet[1653]: I0915 06:41:47.960861    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4-kube-api-access-5hvqq" (OuterVolumeSpecName: "kube-api-access-5hvqq") pod "97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4" (UID: "97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4"). InnerVolumeSpecName "kube-api-access-5hvqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.060391    1653 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5hvqq\" (UniqueName: \"kubernetes.io/projected/97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4-kube-api-access-5hvqq\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.060437    1653 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4-gcp-creds\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.654191    1653 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7x4t6" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.655470    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4" path="/var/lib/kubelet/pods/97ddeb4a-24a2-4c28-8bc7-cdc817e39fd4/volumes"
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.664192    1653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhzd6\" (UniqueName: \"kubernetes.io/projected/d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b-kube-api-access-dhzd6\") pod \"d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b\" (UID: \"d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b\") "
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.665976    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b-kube-api-access-dhzd6" (OuterVolumeSpecName: "kube-api-access-dhzd6") pod "d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b" (UID: "d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b"). InnerVolumeSpecName "kube-api-access-dhzd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.765378    1653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw7mt\" (UniqueName: \"kubernetes.io/projected/97f7a0a8-94e9-42f2-8e49-9731910d0d64-kube-api-access-nw7mt\") pod \"97f7a0a8-94e9-42f2-8e49-9731910d0d64\" (UID: \"97f7a0a8-94e9-42f2-8e49-9731910d0d64\") "
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.765524    1653 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dhzd6\" (UniqueName: \"kubernetes.io/projected/d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b-kube-api-access-dhzd6\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.767145    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f7a0a8-94e9-42f2-8e49-9731910d0d64-kube-api-access-nw7mt" (OuterVolumeSpecName: "kube-api-access-nw7mt") pod "97f7a0a8-94e9-42f2-8e49-9731910d0d64" (UID: "97f7a0a8-94e9-42f2-8e49-9731910d0d64"). InnerVolumeSpecName "kube-api-access-nw7mt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:41:48 addons-022322 kubelet[1653]: I0915 06:41:48.866490    1653 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nw7mt\" (UniqueName: \"kubernetes.io/projected/97f7a0a8-94e9-42f2-8e49-9731910d0d64-kube-api-access-nw7mt\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:41:49 addons-022322 kubelet[1653]: I0915 06:41:49.498989    1653 scope.go:117] "RemoveContainer" containerID="b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8"
	Sep 15 06:41:49 addons-022322 kubelet[1653]: I0915 06:41:49.514749    1653 scope.go:117] "RemoveContainer" containerID="b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8"
	Sep 15 06:41:49 addons-022322 kubelet[1653]: E0915 06:41:49.515165    1653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8\": container with ID starting with b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8 not found: ID does not exist" containerID="b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8"
	Sep 15 06:41:49 addons-022322 kubelet[1653]: I0915 06:41:49.515212    1653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8"} err="failed to get container status \"b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8\": rpc error: code = NotFound desc = could not find container \"b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8\": container with ID starting with b3f6682bc1ef81c1526ca6a775d2b20510a132af4313ad6a48adffad601e2ae8 not found: ID does not exist"
	Sep 15 06:41:49 addons-022322 kubelet[1653]: I0915 06:41:49.515241    1653 scope.go:117] "RemoveContainer" containerID="9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e"
	Sep 15 06:41:49 addons-022322 kubelet[1653]: I0915 06:41:49.532378    1653 scope.go:117] "RemoveContainer" containerID="9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e"
	Sep 15 06:41:49 addons-022322 kubelet[1653]: E0915 06:41:49.532738    1653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e\": container with ID starting with 9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e not found: ID does not exist" containerID="9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e"
	Sep 15 06:41:49 addons-022322 kubelet[1653]: I0915 06:41:49.532783    1653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e"} err="failed to get container status \"9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e\": rpc error: code = NotFound desc = could not find container \"9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e\": container with ID starting with 9e10770b5ea7045dce9a98deceb7143c8feb642aa81881213f34fa505bc33a2e not found: ID does not exist"
	
	
	==> storage-provisioner [f16ac41ad768c5af72a289634ca7ed99edb67900cef177b81dd428a113bf6c28] <==
	I0915 06:31:03.471182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:31:03.479024       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:31:03.479069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:31:03.486210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:31:03.486362       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702!
	I0915 06:31:03.486750       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bf9f02c-8c94-46e0-beae-8c5e4ea3cb36", APIVersion:"v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702 became leader
	I0915 06:31:03.587216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-022322 -n addons-022322
helpers_test.go:261: (dbg) Run:  kubectl --context addons-022322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path ingress-nginx-admission-create-kktzj ingress-nginx-admission-patch-9qczt helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-022322 describe pod busybox test-local-path ingress-nginx-admission-create-kktzj ingress-nginx-admission-patch-9qczt helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-022322 describe pod busybox test-local-path ingress-nginx-admission-create-kktzj ingress-nginx-admission-patch-9qczt helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6: exit status 1 (70.816674ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-022322/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:32:34 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vj9bj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vj9bj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-022322
	  Normal   Pulling    7m45s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m12s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxctw (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xxctw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kktzj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9qczt" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-022322 describe pod busybox test-local-path ingress-nginx-admission-create-kktzj ingress-nginx-admission-patch-9qczt helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.97s)
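Two details in the post-mortem above are worth pulling out for triage: the leftover default/busybox pod is stuck in ImagePullBackOff with "unable to retrieve auth token: invalid username/password" for a public image, and the stuck pods all carry the gcp-creds mount and fake project environment shown in the describe output. A minimal sketch for separating a credential-injection problem from a genuine registry problem, assuming the addons-022322 profile from this run is still up (the crictl pull is a hypothetical manual step, not part of the test):

	# Re-read the pull failures straight from the stuck pod's events:
	kubectl --context addons-022322 describe pod busybox | grep -i -A1 failed
	# Pull the same image on the node itself, bypassing the injected credentials:
	out/minikube-linux-amd64 -p addons-022322 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"

If the node-side pull succeeds while the pod keeps failing, the fake gcp-auth credentials are the likelier culprit than the registry addon itself.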

TestAddons/parallel/Ingress (152.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-022322 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-022322 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-022322 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f3416856-10e1-4e76-adb6-55c31a0baef7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f3416856-10e1-4e76-adb6-55c31a0baef7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003413758s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-022322 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.77499355s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-022322 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-022322 addons disable ingress-dns --alsologtostderr -v=1: (1.051062158s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-022322 addons disable ingress --alsologtostderr -v=1: (7.606941461s)
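The curl step above failed with ssh exit status 28, which matches curl's "operation timed out" exit code: the nginx pod became Ready, but nothing answered on the node's port 80 within the 2m10s window, which points at ingress-controller publishing rather than the pod. A minimal sketch for probing the same path by hand, run before the ingress addon is disabled and assuming the same profile (`--max-time 15` is an added standard curl flag; the Host header is the one the test sends):

	# Probe through the node exactly as the test does, with a bounded timeout:
	out/minikube-linux-amd64 -p addons-022322 ssh \
	  "curl -sS --max-time 15 -H 'Host: nginx.example.com' http://127.0.0.1/"
	# Check that the controller's service actually has endpoints behind :80:
	kubectl --context addons-022322 -n ingress-nginx get pods,svc,endpoints -o wide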
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-022322
helpers_test.go:235: (dbg) docker inspect addons-022322:

-- stdout --
	[
	    {
	        "Id": "f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982",
	        "Created": "2024-09-15T06:29:57.902403759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14686,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:29:58.035217085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/hostname",
	        "HostsPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/hosts",
	        "LogPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982-json.log",
	        "Name": "/addons-022322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-022322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-022322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2-init/diff:/var/lib/docker/overlay2/41629ade7f7315f2df14bde3ca812850a45d34be79d1a0e1cd0df4510f198eaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-022322",
	                "Source": "/var/lib/docker/volumes/addons-022322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-022322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-022322",
	                "name.minikube.sigs.k8s.io": "addons-022322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4341f423acc3b63be59cc1466a91768de2aedaeeb73f44de65907efa3e283439",
	            "SandboxKey": "/var/run/docker/netns/4341f423acc3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-022322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a799b0ec0fecd5a4bd23fbed4e9986ab3cc570dd08d36ddf5fd2808b6a2d36c8",
	                    "EndpointID": "55c8c593338908cf9c9befd1f38c515f233792dcedb45ab4037d822354db546e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-022322",
	                        "f987f02b7bf0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-022322 -n addons-022322
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-022322 logs -n 25: (1.136944753s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-993247              | download-only-993247   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-319436              | download-only-319436   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-993247              | download-only-993247   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | --download-only -p                   | download-docker-583228 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | download-docker-583228               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-583228            | download-docker-583228 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | --download-only -p                   | binary-mirror-350163   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | binary-mirror-350163                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33455               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-350163              | binary-mirror-350163   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| addons  | enable dashboard -p                  | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| start   | -p addons-022322 --wait=true         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:32 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:40 UTC | 15 Sep 24 06:40 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:40 UTC | 15 Sep 24 06:40 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ssh     | addons-022322 ssh curl -s            | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| addons  | addons-022322 addons                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-022322 addons                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| ip      | addons-022322 ip                     | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | -p addons-022322                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | -p addons-022322                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-022322 ip                     | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:34.409975   13892 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:34.410248   13892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:34.410258   13892 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:34.410265   13892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:34.410441   13892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:29:34.411031   13892 out.go:352] Setting JSON to false
	I0915 06:29:34.411877   13892 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":725,"bootTime":1726381049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:29:34.411966   13892 start.go:139] virtualization: kvm guest
	I0915 06:29:34.414135   13892 out.go:177] * [addons-022322] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:29:34.415403   13892 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:29:34.415427   13892 notify.go:220] Checking for updates...
	I0915 06:29:34.417886   13892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:34.419006   13892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:29:34.420065   13892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:29:34.421040   13892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:29:34.422082   13892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:29:34.423276   13892 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:34.444416   13892 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:29:34.444507   13892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:34.493618   13892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 06:29:34.484777495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:34.493719   13892 docker.go:318] overlay module found
	I0915 06:29:34.495531   13892 out.go:177] * Using the docker driver based on user configuration
	I0915 06:29:34.496714   13892 start.go:297] selected driver: docker
	I0915 06:29:34.496727   13892 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:34.496737   13892 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:29:34.497458   13892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:34.540933   13892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 06:29:34.532425836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:34.541099   13892 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:34.541411   13892 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:29:34.543067   13892 out.go:177] * Using Docker driver with root privileges
	I0915 06:29:34.544470   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:29:34.544531   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:29:34.544548   13892 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:29:34.544621   13892 start.go:340] cluster config:
	{Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:29:34.546120   13892 out.go:177] * Starting "addons-022322" primary control-plane node in "addons-022322" cluster
	I0915 06:29:34.547257   13892 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 06:29:34.548470   13892 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:29:34.549705   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:29:34.549737   13892 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 06:29:34.549743   13892 cache.go:56] Caching tarball of preloaded images
	I0915 06:29:34.549740   13892 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:29:34.549818   13892 preload.go:172] Found /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 06:29:34.549828   13892 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:29:34.550188   13892 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json ...
	I0915 06:29:34.550215   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json: {Name:mk75eadabcf88a1e80943e1d313c0ac3326c2ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:29:34.564904   13892 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:34.565023   13892 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:29:34.565042   13892 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:29:34.565047   13892 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:29:34.565054   13892 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:29:34.565061   13892 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:29:46.068469   13892 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:29:46.068505   13892 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:29:46.068552   13892 start.go:360] acquireMachinesLock for addons-022322: {Name:mk8cc43910e6fc14b57d745cb90cbe44d561ca46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:29:46.068638   13892 start.go:364] duration metric: took 67.597µs to acquireMachinesLock for "addons-022322"
	I0915 06:29:46.068659   13892 start.go:93] Provisioning new machine with config: &{Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:29:46.068733   13892 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:29:46.070467   13892 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:29:46.070716   13892 start.go:159] libmachine.API.Create for "addons-022322" (driver="docker")
	I0915 06:29:46.070750   13892 client.go:168] LocalClient.Create starting
	I0915 06:29:46.070843   13892 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem
	I0915 06:29:46.153955   13892 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem
	I0915 06:29:46.229474   13892 cli_runner.go:164] Run: docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:29:46.245025   13892 cli_runner.go:211] docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:29:46.245103   13892 network_create.go:284] running [docker network inspect addons-022322] to gather additional debugging logs...
	I0915 06:29:46.245124   13892 cli_runner.go:164] Run: docker network inspect addons-022322
	W0915 06:29:46.260140   13892 cli_runner.go:211] docker network inspect addons-022322 returned with exit code 1
	I0915 06:29:46.260172   13892 network_create.go:287] error running [docker network inspect addons-022322]: docker network inspect addons-022322: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-022322 not found
	I0915 06:29:46.260189   13892 network_create.go:289] output of [docker network inspect addons-022322]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-022322 not found
	
	** /stderr **
	I0915 06:29:46.260306   13892 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:29:46.275634   13892 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000722ff0}
	I0915 06:29:46.275681   13892 network_create.go:124] attempt to create docker network addons-022322 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:29:46.275724   13892 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-022322 addons-022322
	I0915 06:29:46.333701   13892 network_create.go:108] docker network addons-022322 192.168.49.0/24 created
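	
	The network_create lines above are what the rest of the start sequence builds on: the static IP 192.168.49.2 and the container's --network flag both refer to this bridge network. A minimal sketch (not part of this run) of confirming the subnet and gateway, reusing the same Go template fields the inspect calls in this log already use and assuming only the profile name addons-022322:
	
	  docker network inspect addons-022322 --format '{{range .IPAM.Config}}{{.Subnet}} gateway {{.Gateway}}{{end}}'
	  # per the lines above, this should print: 192.168.49.0/24 gateway 192.168.49.1
	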
	I0915 06:29:46.333733   13892 kic.go:121] calculated static IP "192.168.49.2" for the "addons-022322" container
	I0915 06:29:46.333805   13892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:29:46.348257   13892 cli_runner.go:164] Run: docker volume create addons-022322 --label name.minikube.sigs.k8s.io=addons-022322 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:29:46.364683   13892 oci.go:103] Successfully created a docker volume addons-022322
	I0915 06:29:46.364749   13892 cli_runner.go:164] Run: docker run --rm --name addons-022322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --entrypoint /usr/bin/test -v addons-022322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:29:53.558650   13892 cli_runner.go:217] Completed: docker run --rm --name addons-022322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --entrypoint /usr/bin/test -v addons-022322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (7.19385898s)
	I0915 06:29:53.558683   13892 oci.go:107] Successfully prepared a docker volume addons-022322
	I0915 06:29:53.558702   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:29:53.558719   13892 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:29:53.558765   13892 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-022322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:29:57.843175   13892 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-022322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.284379385s)
	I0915 06:29:57.843202   13892 kic.go:203] duration metric: took 4.284480255s to extract preloaded images to volume ...
	W0915 06:29:57.843320   13892 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 06:29:57.843484   13892 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:29:57.888235   13892 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-022322 --name addons-022322 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-022322 --network addons-022322 --ip 192.168.49.2 --volume addons-022322:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:29:58.195371   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Running}}
	I0915 06:29:58.213384   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.231552   13892 cli_runner.go:164] Run: docker exec addons-022322 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:29:58.274993   13892 oci.go:144] the created container "addons-022322" has a running status.
	I0915 06:29:58.275022   13892 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa...
	I0915 06:29:58.414826   13892 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:29:58.438897   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.455371   13892 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:29:58.455390   13892 kic_runner.go:114] Args: [docker exec --privileged addons-022322 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:29:58.500533   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.517370   13892 machine.go:93] provisionDockerMachine start ...
	I0915 06:29:58.517454   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:29:58.541070   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:29:58.541337   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:29:58.541359   13892 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:29:58.542136   13892 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45940->127.0.0.1:32768: read: connection reset by peer
	I0915 06:30:01.671607   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-022322
	
	I0915 06:30:01.671636   13892 ubuntu.go:169] provisioning hostname "addons-022322"
	I0915 06:30:01.671686   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:01.688450   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:01.688643   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:01.688659   13892 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-022322 && echo "addons-022322" | sudo tee /etc/hostname
	I0915 06:30:01.830097   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-022322
	
	I0915 06:30:01.830160   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:01.847238   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:01.847398   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:01.847416   13892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-022322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-022322/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-022322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:30:01.976277   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:30:01.976304   13892 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-5979/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-5979/.minikube}
	I0915 06:30:01.976347   13892 ubuntu.go:177] setting up certificates
	I0915 06:30:01.976360   13892 provision.go:84] configureAuth start
	I0915 06:30:01.976418   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:01.992863   13892 provision.go:143] copyHostCerts
	I0915 06:30:01.992932   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/ca.pem (1082 bytes)
	I0915 06:30:01.993032   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/cert.pem (1123 bytes)
	I0915 06:30:01.993090   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/key.pem (1679 bytes)
	I0915 06:30:01.993138   13892 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem org=jenkins.addons-022322 san=[127.0.0.1 192.168.49.2 addons-022322 localhost minikube]
	I0915 06:30:02.152480   13892 provision.go:177] copyRemoteCerts
	I0915 06:30:02.152547   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:30:02.152581   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.169072   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.264370   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:30:02.285061   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:30:02.305376   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 06:30:02.325505   13892 provision.go:87] duration metric: took 349.132448ms to configureAuth
	I0915 06:30:02.325532   13892 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:30:02.325690   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:02.325794   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.342353   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:02.342515   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:02.342529   13892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:30:02.557166   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:30:02.557186   13892 machine.go:96] duration metric: took 4.039795692s to provisionDockerMachine
	I0915 06:30:02.557198   13892 client.go:171] duration metric: took 16.486440184s to LocalClient.Create
	I0915 06:30:02.557211   13892 start.go:167] duration metric: took 16.486496436s to libmachine.API.Create "addons-022322"
	I0915 06:30:02.557220   13892 start.go:293] postStartSetup for "addons-022322" (driver="docker")
	I0915 06:30:02.557232   13892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:30:02.557296   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:30:02.557345   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.573470   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.668798   13892 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:30:02.671706   13892 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:30:02.671735   13892 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:30:02.671743   13892 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:30:02.671751   13892 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:30:02.671763   13892 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-5979/.minikube/addons for local assets ...
	I0915 06:30:02.671828   13892 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-5979/.minikube/files for local assets ...
	I0915 06:30:02.671860   13892 start.go:296] duration metric: took 114.633114ms for postStartSetup
	I0915 06:30:02.672224   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:02.688735   13892 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json ...
	I0915 06:30:02.688986   13892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:30:02.689026   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.704764   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.792641   13892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:30:02.797055   13892 start.go:128] duration metric: took 16.728306999s to createHost
	I0915 06:30:02.797078   13892 start.go:83] releasing machines lock for "addons-022322", held for 16.728428922s
	I0915 06:30:02.797129   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:02.813813   13892 ssh_runner.go:195] Run: cat /version.json
	I0915 06:30:02.813860   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.813912   13892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:30:02.813966   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.831602   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.832784   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.923562   13892 ssh_runner.go:195] Run: systemctl --version
	I0915 06:30:02.995566   13892 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:30:03.130869   13892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:30:03.134959   13892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:03.151986   13892 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:30:03.152064   13892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:03.177621   13892 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0915 06:30:03.177641   13892 start.go:495] detecting cgroup driver to use...
	I0915 06:30:03.177677   13892 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:30:03.177720   13892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:30:03.191256   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:30:03.200792   13892 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:30:03.200832   13892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:30:03.212398   13892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:30:03.224680   13892 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:30:03.296606   13892 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:30:03.380521   13892 docker.go:233] disabling docker service ...
	I0915 06:30:03.380577   13892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:30:03.397309   13892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:30:03.407246   13892 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:30:03.479912   13892 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:30:03.557251   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:30:03.567181   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:30:03.580975   13892 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:30:03.581028   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.589417   13892 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:30:03.589475   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.597938   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.606431   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.614878   13892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:30:03.622833   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.630960   13892 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.644352   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.652628   13892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:30:03.659670   13892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:30:03.666698   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:03.739739   13892 ssh_runner.go:195] Run: sudo systemctl restart crio
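	
	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A minimal sketch of checking that the drop-in carries the values those edits target; the grep pattern is an assumption for illustration, while the expected lines follow directly from the sed expressions logged above:
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # expected, per the edits above:
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	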
	I0915 06:30:03.813327   13892 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:30:03.813394   13892 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:30:03.816594   13892 start.go:563] Will wait 60s for crictl version
	I0915 06:30:03.816637   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:30:03.819439   13892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:30:03.850136   13892 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0915 06:30:03.850230   13892 ssh_runner.go:195] Run: crio --version
	I0915 06:30:03.884035   13892 ssh_runner.go:195] Run: crio --version
	I0915 06:30:03.917786   13892 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 06:30:03.918938   13892 cli_runner.go:164] Run: docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:30:03.934390   13892 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:30:03.937713   13892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:30:03.947346   13892 kubeadm.go:883] updating cluster {Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:30:03.947459   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:03.947520   13892 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:04.005083   13892 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:30:04.005102   13892 crio.go:433] Images already preloaded, skipping extraction
	I0915 06:30:04.005148   13892 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:04.035478   13892 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:30:04.035500   13892 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:30:04.035509   13892 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0915 06:30:04.035628   13892 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-022322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:30:04.035702   13892 ssh_runner.go:195] Run: crio config
	I0915 06:30:04.075458   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:30:04.075479   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:30:04.075490   13892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:30:04.075516   13892 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-022322 NodeName:addons-022322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:30:04.075684   13892 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-022322"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:30:04.075747   13892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:30:04.083565   13892 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:30:04.083629   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:30:04.091035   13892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0915 06:30:04.106246   13892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:30:04.121787   13892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0915 06:30:04.137021   13892 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:30:04.139971   13892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:30:04.149279   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:04.219995   13892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:30:04.231563   13892 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322 for IP: 192.168.49.2
	I0915 06:30:04.231583   13892 certs.go:194] generating shared ca certs ...
	I0915 06:30:04.231604   13892 certs.go:226] acquiring lock for ca certs: {Name:mkdad922548833f717724234d3dfea667af688cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.231715   13892 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key
	I0915 06:30:04.327854   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt ...
	I0915 06:30:04.327883   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt: {Name:mk88553ea6fe6b3bbcddbaf5fb4399b9d57d5f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.328061   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key ...
	I0915 06:30:04.328080   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key: {Name:mk24979239a9d34f46352c8e1b862a8e1f67ff74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.328180   13892 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key
	I0915 06:30:04.431987   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt ...
	I0915 06:30:04.432015   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt: {Name:mk51bec24258c7187bbcfbda02cab37b09aca3d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.432183   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key ...
	I0915 06:30:04.432194   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key: {Name:mk16f3436fddecb64c7b08ccd6fc72cd1ef1fcbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.432279   13892 certs.go:256] generating profile certs ...
	I0915 06:30:04.432331   13892 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key
	I0915 06:30:04.432352   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt with IP's: []
	I0915 06:30:04.586803   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt ...
	I0915 06:30:04.586831   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: {Name:mked263498a55efc2d51dcfb8a63fb9ec85dbcce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.586983   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key ...
	I0915 06:30:04.586993   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key: {Name:mk512a1e1959bb23fe8a38640e6f78daabedd436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.587058   13892 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91
	I0915 06:30:04.587076   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:30:04.750681   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 ...
	I0915 06:30:04.750707   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91: {Name:mkee5aa0fd2cbaa659cee7dc8b42df64402edc7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.750854   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91 ...
	I0915 06:30:04.750867   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91: {Name:mk1e30234ffaa908afe95a4568f6afb8dd531545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.750937   13892 certs.go:381] copying /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 -> /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt
	I0915 06:30:04.751005   13892 certs.go:385] copying /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91 -> /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key
	I0915 06:30:04.751050   13892 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key
	I0915 06:30:04.751065   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt with IP's: []
	I0915 06:30:04.940019   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt ...
	I0915 06:30:04.940043   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt: {Name:mk350f05c318062bf8390e5793e0bce85435f32a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.940196   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key ...
	I0915 06:30:04.940224   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key: {Name:mk6d8d46803827bdaeae91eab214ce101c0c0420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.940408   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem (1679 bytes)
	I0915 06:30:04.940441   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:30:04.940467   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:30:04.940491   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem (1679 bytes)
	I0915 06:30:04.941035   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:30:04.963000   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:30:04.983402   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:30:05.003697   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:30:05.024132   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:30:05.043937   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:30:05.063970   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:30:05.084090   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:30:05.104158   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:30:05.125016   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:30:05.140478   13892 ssh_runner.go:195] Run: openssl version
	I0915 06:30:05.145206   13892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:30:05.153254   13892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.156142   13892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:30 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.156185   13892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.162089   13892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
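	[editor's note] The two `openssl` steps above install the minikube CA into the node's system trust store: the certificate's subject hash is computed, then a symlink named `<hash>.0` (here `b5213941.0`) is created under /etc/ssl/certs, which is how OpenSSL-based clients locate trusted CAs. A minimal Go sketch of the same sequence, assuming direct local file access rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

	// openssl prints the subject hash used as the symlink name (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// /etc/ssl/certs/<hash>.0 -> the PEM; create it only if it is not already there,
	// mirroring the `test -L ... || ln -fs ...` guard in the log.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted:", link)
}
```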
	I0915 06:30:05.169807   13892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:30:05.172461   13892 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:30:05.172500   13892 kubeadm.go:392] StartCluster: {Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:05.172563   13892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:30:05.172600   13892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:30:05.202825   13892 cri.go:89] found id: ""
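	[editor's note] The crictl call above enumerates any pre-existing kube-system containers; the empty result (`found id: ""`) confirms the node is fresh before kubeadm runs. A hedged Go sketch of that enumeration, assuming local shell access and crictl on PATH:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the crictl invocation from the log:
// it returns the IDs of all containers (running or not) labeled with
// the kube-system pod namespace.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}
```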
	I0915 06:30:05.202888   13892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:30:05.210535   13892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:30:05.217839   13892 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:30:05.217879   13892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:30:05.225045   13892 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:30:05.225061   13892 kubeadm.go:157] found existing configuration files:
	
	I0915 06:30:05.225099   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:30:05.232105   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:30:05.232161   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:30:05.238944   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:30:05.245833   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:30:05.245876   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:30:05.252619   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:30:05.259724   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:30:05.259769   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:30:05.266638   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:30:05.273591   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:30:05.273634   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
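	[editor's note] The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and removed otherwise so `kubeadm init` regenerates it (here all four are absent, as the `No such file or directory` stderr shows). A minimal sketch of that loop, run locally rather than over ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at another endpoint: delete so kubeadm
			// writes a fresh one; ignore the error, matching `rm -f`.
			os.Remove(conf)
			fmt.Println("removed stale config:", conf)
			continue
		}
		fmt.Println("kept:", conf)
	}
}
```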
	I0915 06:30:05.280379   13892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 06:30:05.310747   13892 kubeadm.go:310] W0915 06:30:05.310080    1295 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:30:05.311052   13892 kubeadm.go:310] W0915 06:30:05.310582    1295 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:30:05.327784   13892 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0915 06:30:05.372778   13892 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:30:15.409306   13892 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:30:15.409389   13892 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:30:15.409512   13892 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 06:30:15.409605   13892 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0915 06:30:15.409650   13892 kubeadm.go:310] OS: Linux
	I0915 06:30:15.409729   13892 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 06:30:15.409811   13892 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 06:30:15.409885   13892 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 06:30:15.409961   13892 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 06:30:15.410028   13892 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 06:30:15.410096   13892 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 06:30:15.410154   13892 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 06:30:15.410224   13892 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 06:30:15.410283   13892 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 06:30:15.410362   13892 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:30:15.410462   13892 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:30:15.410539   13892 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:30:15.410605   13892 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:30:15.412349   13892 out.go:235]   - Generating certificates and keys ...
	I0915 06:30:15.412446   13892 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:30:15.412504   13892 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:30:15.412593   13892 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:30:15.412685   13892 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:30:15.412743   13892 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:30:15.412790   13892 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:30:15.412843   13892 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:30:15.412979   13892 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-022322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:30:15.413045   13892 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:30:15.413211   13892 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-022322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:30:15.413278   13892 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:30:15.413348   13892 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:30:15.413417   13892 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:30:15.413497   13892 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:30:15.413543   13892 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:30:15.413596   13892 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:30:15.413651   13892 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:30:15.413711   13892 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:30:15.413763   13892 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:30:15.413833   13892 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:30:15.413920   13892 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:30:15.415294   13892 out.go:235]   - Booting up control plane ...
	I0915 06:30:15.415383   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:30:15.415472   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:30:15.415571   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:30:15.415674   13892 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:30:15.415751   13892 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:30:15.415785   13892 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:30:15.415945   13892 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:30:15.416086   13892 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:30:15.416138   13892 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00131336s
	I0915 06:30:15.416214   13892 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:30:15.416267   13892 kubeadm.go:310] [api-check] The API server is healthy after 4.0019115s
	I0915 06:30:15.416369   13892 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:30:15.416471   13892 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:30:15.416520   13892 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:30:15.416688   13892 kubeadm.go:310] [mark-control-plane] Marking the node addons-022322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:30:15.416769   13892 kubeadm.go:310] [bootstrap-token] Using token: qtz71d.xvu8oxfcrox05ula
	I0915 06:30:15.418849   13892 out.go:235]   - Configuring RBAC rules ...
	I0915 06:30:15.418964   13892 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:30:15.419059   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:30:15.419214   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:30:15.419359   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:30:15.419468   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:30:15.419543   13892 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:30:15.419648   13892 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:30:15.419706   13892 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:30:15.419754   13892 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:30:15.419760   13892 kubeadm.go:310] 
	I0915 06:30:15.419809   13892 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:30:15.419820   13892 kubeadm.go:310] 
	I0915 06:30:15.419907   13892 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:30:15.419917   13892 kubeadm.go:310] 
	I0915 06:30:15.419949   13892 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:30:15.420041   13892 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:30:15.420120   13892 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:30:15.420127   13892 kubeadm.go:310] 
	I0915 06:30:15.420230   13892 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:30:15.420239   13892 kubeadm.go:310] 
	I0915 06:30:15.420279   13892 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:30:15.420288   13892 kubeadm.go:310] 
	I0915 06:30:15.420336   13892 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:30:15.420404   13892 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:30:15.420486   13892 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:30:15.420494   13892 kubeadm.go:310] 
	I0915 06:30:15.420609   13892 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:30:15.420683   13892 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:30:15.420688   13892 kubeadm.go:310] 
	I0915 06:30:15.420761   13892 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qtz71d.xvu8oxfcrox05ula \
	I0915 06:30:15.420863   13892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b6fa81cefa24e7bb86a72fc94b64425479c808b0a0b074c57900fb8f22ced41 \
	I0915 06:30:15.420883   13892 kubeadm.go:310] 	--control-plane 
	I0915 06:30:15.420892   13892 kubeadm.go:310] 
	I0915 06:30:15.420975   13892 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:30:15.420984   13892 kubeadm.go:310] 
	I0915 06:30:15.421055   13892 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qtz71d.xvu8oxfcrox05ula \
	I0915 06:30:15.421162   13892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b6fa81cefa24e7bb86a72fc94b64425479c808b0a0b074c57900fb8f22ced41 
	I0915 06:30:15.421174   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:30:15.421186   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:30:15.422864   13892 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 06:30:15.424157   13892 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 06:30:15.427756   13892 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 06:30:15.427770   13892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 06:30:15.443978   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
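	[editor's note] The two steps above apply the kindnet CNI manifest: the YAML is streamed from memory to /var/tmp/minikube/cni.yaml, then applied with the version-pinned kubectl and the node-local kubeconfig. A sketch of the same pattern, with a placeholder manifest standing in for the YAML minikube generates:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Placeholder: the real manifest (a kindnet DaemonSet, ~2601 bytes per
	// the log) is generated in memory by minikube.
	manifest := []byte("# kindnet manifest would go here\n")

	const target = "/var/tmp/minikube/cni.yaml"
	if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(target, manifest, 0o644); err != nil {
		panic(err)
	}

	// Apply with the version-pinned kubectl, as the ssh_runner line does.
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", target)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```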
	I0915 06:30:15.630994   13892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:30:15.631066   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:15.631098   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-022322 minikube.k8s.io/updated_at=2024_09_15T06_30_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-022322 minikube.k8s.io/primary=true
	I0915 06:30:15.637726   13892 ops.go:34] apiserver oom_adj: -16
	I0915 06:30:15.740354   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:16.241041   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:16.740787   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:17.240556   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:17.741154   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:18.240693   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:18.740996   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:19.241363   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:19.740837   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:20.241069   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:20.301913   13892 kubeadm.go:1113] duration metric: took 4.670906624s to wait for elevateKubeSystemPrivileges
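	[editor's note] The ten `kubectl get sa default` runs above, spaced roughly 500ms apart, are minikube polling until kube-controller-manager has created the "default" ServiceAccount (needed before the minikube-rbac ClusterRoleBinding can take effect); the loop succeeds after 4.67s. A sketch of that polling pattern, with the timeout value an assumption since the log does not state it:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed budget, not from the log

	for time.Now().Before(deadline) {
		// Exits 0 only once the "default" ServiceAccount exists.
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for the default service account")
}
```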
	I0915 06:30:20.301953   13892 kubeadm.go:394] duration metric: took 15.129453888s to StartCluster
	I0915 06:30:20.301974   13892 settings.go:142] acquiring lock: {Name:mk6128dee5a1f201e20204fc9647ceb1f8837444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:20.302067   13892 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:30:20.302410   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/kubeconfig: {Name:mkb9d32ea81cbb0fb472b94a2fbc3394fd0d5468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:20.302584   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:30:20.302603   13892 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:30:20.302674   13892 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:30:20.302780   13892 addons.go:69] Setting yakd=true in profile "addons-022322"
	I0915 06:30:20.302797   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:20.302809   13892 addons.go:234] Setting addon yakd=true in "addons-022322"
	I0915 06:30:20.302800   13892 addons.go:69] Setting ingress=true in profile "addons-022322"
	I0915 06:30:20.302811   13892 addons.go:69] Setting registry=true in profile "addons-022322"
	I0915 06:30:20.302830   13892 addons.go:234] Setting addon ingress=true in "addons-022322"
	I0915 06:30:20.302841   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302846   13892 addons.go:234] Setting addon registry=true in "addons-022322"
	I0915 06:30:20.302853   13892 addons.go:69] Setting default-storageclass=true in profile "addons-022322"
	I0915 06:30:20.302869   13892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-022322"
	I0915 06:30:20.302882   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302893   13892 addons.go:69] Setting metrics-server=true in profile "addons-022322"
	I0915 06:30:20.302896   13892 addons.go:69] Setting storage-provisioner=true in profile "addons-022322"
	I0915 06:30:20.302910   13892 addons.go:234] Setting addon storage-provisioner=true in "addons-022322"
	I0915 06:30:20.302915   13892 addons.go:234] Setting addon metrics-server=true in "addons-022322"
	I0915 06:30:20.302906   13892 addons.go:69] Setting inspektor-gadget=true in profile "addons-022322"
	I0915 06:30:20.302941   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302944   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302959   13892 addons.go:234] Setting addon inspektor-gadget=true in "addons-022322"
	I0915 06:30:20.302986   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.303062   13892 addons.go:69] Setting gcp-auth=true in profile "addons-022322"
	I0915 06:30:20.303085   13892 mustload.go:65] Loading cluster: addons-022322
	I0915 06:30:20.303201   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303250   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:20.303362   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303410   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303410   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303453   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303460   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303468   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303767   13892 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-022322"
	I0915 06:30:20.303787   13892 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-022322"
	I0915 06:30:20.303811   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302882   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.304488   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309326   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309817   13892 addons.go:69] Setting helm-tiller=true in profile "addons-022322"
	I0915 06:30:20.309849   13892 addons.go:234] Setting addon helm-tiller=true in "addons-022322"
	I0915 06:30:20.309887   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.310907   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.331963   13892 addons.go:69] Setting volcano=true in profile "addons-022322"
	I0915 06:30:20.332020   13892 addons.go:234] Setting addon volcano=true in "addons-022322"
	I0915 06:30:20.332067   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.332190   13892 addons.go:69] Setting cloud-spanner=true in profile "addons-022322"
	I0915 06:30:20.332222   13892 addons.go:234] Setting addon cloud-spanner=true in "addons-022322"
	I0915 06:30:20.332251   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.332716   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.332771   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.302869   13892 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-022322"
	I0915 06:30:20.333031   13892 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-022322"
	I0915 06:30:20.333380   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.333586   13892 addons.go:69] Setting ingress-dns=true in profile "addons-022322"
	I0915 06:30:20.333604   13892 addons.go:234] Setting addon ingress-dns=true in "addons-022322"
	I0915 06:30:20.333652   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.334281   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.334862   13892 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-022322"
	I0915 06:30:20.334933   13892 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-022322"
	I0915 06:30:20.334982   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.335579   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309325   13892 out.go:177] * Verifying Kubernetes components...
	I0915 06:30:20.337960   13892 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:30:20.338463   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:20.338120   13892 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:30:20.338351   13892 addons.go:69] Setting volumesnapshots=true in profile "addons-022322"
	I0915 06:30:20.338628   13892 addons.go:234] Setting addon volumesnapshots=true in "addons-022322"
	I0915 06:30:20.339467   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.339891   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:30:20.339905   13892 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:30:20.339941   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.342092   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.342452   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:30:20.342525   13892 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:30:20.342607   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.342971   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.346120   13892 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:30:20.347336   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:30:20.348659   13892 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:30:20.348674   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:20.348704   13892 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:30:20.349046   13892 addons.go:234] Setting addon default-storageclass=true in "addons-022322"
	I0915 06:30:20.349207   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.349642   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.351436   13892 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:30:20.351456   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:30:20.351509   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.352633   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:20.354116   13892 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:30:20.354130   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:30:20.354167   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.357730   13892 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:30:20.357783   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:30:20.357860   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.358885   13892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:30:20.360491   13892 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:30:20.360511   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:30:20.360581   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.366477   13892 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 06:30:20.367705   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 06:30:20.367726   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 06:30:20.367773   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.373892   13892 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:30:20.373916   13892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:30:20.373975   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.401143   13892 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-022322"
	I0915 06:30:20.401194   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.401670   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.404458   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:30:20.404531   13892 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:30:20.406526   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.412264   13892 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:30:20.412294   13892 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:30:20.412366   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.413394   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:30:20.414515   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W0915 06:30:20.415159   13892 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0915 06:30:20.416250   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.421239   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:30:20.425614   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:30:20.426998   13892 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:30:20.427153   13892 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:30:20.428255   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:30:20.428416   13892 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:30:20.428428   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:30:20.428481   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.428833   13892 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:30:20.428848   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:30:20.428892   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.430788   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.431752   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:30:20.431811   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:30:20.433923   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:30:20.433942   13892 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:30:20.433993   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.435738   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:30:20.437159   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:30:20.437177   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:30:20.437225   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.445942   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.448432   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.456319   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.457008   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.466588   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.470634   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.470670   13892 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:30:20.471780   13892 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:30:20.472972   13892 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:30:20.472989   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:30:20.473040   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.475108   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 06:30:20.477999   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.481170   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.488919   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.489280   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.493975   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.729415   13892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:30:20.832138   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 06:30:20.832170   13892 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 06:30:20.842928   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:30:20.842956   13892 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:30:20.843447   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:30:20.845491   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:30:20.845517   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:30:20.935961   13892 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:30:20.935990   13892 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:30:21.020819   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:30:21.020845   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:30:21.022344   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:30:21.022633   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:30:21.028470   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:30:21.028540   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:30:21.036612   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:30:21.036638   13892 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 06:30:21.043861   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:30:21.044948   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:30:21.044984   13892 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:30:21.129298   13892 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:30:21.129392   13892 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:30:21.132074   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:30:21.136371   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:30:21.140305   13892 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:30:21.140374   13892 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:30:21.223515   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:30:21.223615   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:30:21.231836   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:30:21.231864   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:30:21.323974   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:30:21.323999   13892 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:30:21.324884   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:30:21.324911   13892 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:30:21.329210   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:30:21.335116   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:30:21.343630   13892 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:30:21.343660   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:30:21.423095   13892 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:30:21.423183   13892 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:30:21.439939   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:30:21.439989   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:30:21.521606   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:30:21.521696   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:30:21.537275   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:30:21.621192   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:30:21.621282   13892 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:30:21.724452   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:30:21.724539   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:30:21.737909   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:30:21.739858   13892 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:30:21.739880   13892 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:30:21.925913   13892 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.450763214s)
	I0915 06:30:21.925946   13892 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
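	[editor's note] The bash pipeline that just completed rewrites the CoreDNS Corefile ConfigMap so host.minikube.internal resolves to the host gateway (192.168.49.1): a `hosts { ... fallthrough }` block is inserted ahead of the `forward . /etc/resolv.conf` line and the result is piped back through `kubectl replace`. A sketch of the string surgery alone, with the kubectl round-trip omitted:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block before the forward plugin,
// mirroring the sed expression in the log.
func injectHostRecord(corefile, gatewayIP string) string {
	hosts := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		gatewayIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	// Minimal stand-in Corefile for demonstration.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
```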
	I0915 06:30:21.927074   13892 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.197633331s)
	I0915 06:30:21.927844   13892 node_ready.go:35] waiting up to 6m0s for node "addons-022322" to be "Ready" ...
	I0915 06:30:21.938668   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:30:21.938695   13892 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:30:22.131212   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:30:22.131302   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:30:22.227350   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:30:22.227434   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:30:22.337579   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.494097126s)
	I0915 06:30:22.424841   13892 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:30:22.424937   13892 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:30:22.426572   13892 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:22.426594   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:30:22.441869   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:30:22.441902   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:30:22.625349   13892 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:30:22.625431   13892 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:30:22.625749   13892 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-022322" context rescaled to 1 replicas
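kapi.go:214 reports the coredns deployment being rescaled to a single replica for this single-node cluster. One way to express that with client-go's scale subresource (rescaleCoreDNS is an illustrative name, not minikube's actual helper):

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS pins the coredns deployment to one replica via the
    // scale subresource (sketch; error handling trimmed to the essentials).
    func rescaleCoreDNS(cs *kubernetes.Clientset) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").
    		GetScale(context.TODO(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = 1
    	_, err = cs.AppsV1().Deployments("kube-system").
    		UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
    	return err
    }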
	I0915 06:30:22.722472   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:30:22.737559   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:22.830732   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:30:22.830830   13892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:30:22.941338   13892 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:30:22.941417   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:30:23.037465   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:30:23.037557   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:30:23.131738   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:30:23.527823   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:30:23.527862   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:30:23.635288   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:30:23.635379   13892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:30:23.939243   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:23.941219   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
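Each of these ssh_runner invocations is a single kubectl apply carrying one -f flag per manifest, executed inside the node over SSH. A plain-Go sketch of issuing such a batched apply locally (applyManifests is a hypothetical wrapper; the real runner streams the command over the SSH session instead):

    package main

    import (
    	"os"
    	"os/exec"
    )

    // applyManifests runs one kubectl apply over all manifests at once, so
    // the whole batch is submitted in a single invocation.
    func applyManifests(kubectl, kubeconfig string, manifests []string) ([]byte, error) {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	return cmd.CombinedOutput()
    }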
	I0915 06:30:24.842837   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.820395677s)
	I0915 06:30:24.843012   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.820265268s)
	I0915 06:30:26.241838   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.19793849s)
	I0915 06:30:26.241872   13892 addons.go:475] Verifying addon ingress=true in "addons-022322"
	I0915 06:30:26.241927   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.109761671s)
	I0915 06:30:26.241965   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.105507724s)
	I0915 06:30:26.242074   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.912777866s)
	I0915 06:30:26.242143   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.906961401s)
	I0915 06:30:26.242274   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.704908207s)
	I0915 06:30:26.242305   13892 addons.go:475] Verifying addon metrics-server=true in "addons-022322"
	I0915 06:30:26.242321   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.504383825s)
	I0915 06:30:26.242338   13892 addons.go:475] Verifying addon registry=true in "addons-022322"
	I0915 06:30:26.242376   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.519818065s)
	I0915 06:30:26.243677   13892 out.go:177] * Verifying registry addon...
	I0915 06:30:26.243699   13892 out.go:177] * Verifying ingress addon...
	I0915 06:30:26.243677   13892 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-022322 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:30:26.245794   13892 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:30:26.246058   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:30:26.250360   13892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:30:26.250378   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:26.250570   13892 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:30:26.250588   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
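The Found/waiting pairs come from kapi.go: each label selector is resolved to a pod list and the phases are re-read until nothing is left Pending. Roughly, per iteration (sketch, assuming client-go; printPodPhases is an illustrative name):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printPodPhases lists the pods matching a label selector and reports
    // each pod's phase, as one iteration of the wait loop above does.
    func printPodPhases(cs *kubernetes.Clientset, ns, selector string) error {
    	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
    		metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return err
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
    	}
    	return nil
    }

For the loop at 06:30:26 this would be called as printPodPhases(cs, "kube-system", "kubernetes.io/minikube-addons=registry") and printPodPhases(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx").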
	I0915 06:30:26.430553   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:26.752630   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:26.753835   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:26.845459   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.107795434s)
	W0915 06:30:26.845502   13892 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:30:26.845528   13892 retry.go:31] will retry after 304.40675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
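This failure is a CRD establishment race: the same apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass instance, and the instance is rejected because the just-created CRD is not yet served. retry.go:31 simply reruns the command after a short delay (and the rerun at 06:30:27 below adds --force). The underlying pattern, sketched generically:

    package main

    import (
    	"fmt"
    	"time"
    )

    // retryApply reruns an operation after a short, growing backoff, giving
    // freshly created CRDs time to become established before the CRs that
    // depend on them are resubmitted.
    func retryApply(run func() error, attempts int, backoff time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = run(); err == nil {
    			return nil
    		}
    		time.Sleep(backoff)
    		backoff *= 2
    	}
    	return fmt.Errorf("apply still failing after %d attempts: %w", attempts, err)
    }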
	I0915 06:30:26.845567   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.713721026s)
	I0915 06:30:27.124607   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.18332755s)
	I0915 06:30:27.124648   13892 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-022322"
	I0915 06:30:27.126674   13892 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:30:27.128843   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:30:27.131216   13892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:30:27.131239   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:27.150966   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:27.248632   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:27.249242   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:27.566407   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:30:27.566474   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:27.584537   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
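The new-ssh-client lines show how minikube reaches the node container: docker inspect extracts the host port bound to the container's 22/tcp, and the SSH client then dials 127.0.0.1 on that port. The port lookup, using the same Go template as cli_runner.go:164 above (sshHostPort is an illustrative helper; the log wraps the template in extra quotes, which this sketch omits):

    package main

    import (
    	"os/exec"
    	"strings"
    )

    // sshHostPort returns the host port Docker mapped to the container's
    // sshd (22/tcp), e.g. "32768" for addons-022322 in this run.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }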
	I0915 06:30:27.632194   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:27.750415   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:27.751081   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:27.841475   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:30:27.934260   13892 addons.go:234] Setting addon gcp-auth=true in "addons-022322"
	I0915 06:30:27.934313   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:27.934813   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:27.955612   13892 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:30:27.955667   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:27.970776   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:28.135033   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:28.249556   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:28.250273   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:28.430964   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:28.631563   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:28.748977   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:28.749552   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.132354   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:29.249177   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:29.249568   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.633236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:29.750088   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:29.750636   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.859251   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.708230768s)
	I0915 06:30:29.859418   13892 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.903782974s)
	I0915 06:30:29.861552   13892 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:30:29.863225   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:29.864891   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:30:29.864910   13892 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:30:29.925719   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:30:29.925740   13892 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:30:29.943867   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:30:29.943890   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:30:29.960393   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:30:30.132966   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:30.249143   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:30.249613   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:30.526051   13892 addons.go:475] Verifying addon gcp-auth=true in "addons-022322"
	I0915 06:30:30.527857   13892 out.go:177] * Verifying gcp-auth addon...
	I0915 06:30:30.530049   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:30:30.532704   13892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:30:30.532727   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:30.633796   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:30.749512   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:30.749926   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:30.930726   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:31.032992   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:31.132430   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:31.248998   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:31.249582   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:31.532095   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:31.631866   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:31.749423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:31.749735   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.033310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:32.131692   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:32.248944   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:32.249409   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.532440   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:32.632069   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:32.749426   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:32.749899   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.930811   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:33.033142   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:33.131445   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:33.249273   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:33.249696   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:33.533493   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:33.632131   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:33.749349   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:33.749683   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.033541   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:34.131638   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:34.249215   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:34.249571   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.533324   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:34.631916   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:34.749515   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:34.749960   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.931178   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:35.033423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:35.131815   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:35.249166   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:35.249432   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:35.532510   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:35.631903   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:35.749413   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:35.749752   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:36.032982   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:36.132490   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:36.248776   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:36.249119   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:36.533499   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:36.631988   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:36.749385   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:36.749758   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:37.033350   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:37.131770   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:37.249247   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:37.249628   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:37.430856   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:37.532843   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:37.632359   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:37.748704   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:37.749002   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:38.032752   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:38.132301   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:38.248619   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:38.249266   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:38.533360   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:38.631718   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:38.749031   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:38.749371   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:39.033571   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:39.132181   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:39.248407   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:39.248863   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:39.431113   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:39.533483   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:39.631970   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:39.749127   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:39.749498   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:40.032583   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:40.131976   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:40.249304   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:40.249738   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:40.533163   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:40.631473   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:40.748891   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:40.749468   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.032705   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:41.132285   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:41.248530   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:41.249032   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.533199   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:41.631596   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:41.748844   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:41.749922   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.931608   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:42.033113   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:42.131418   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:42.248812   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:42.249143   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:42.533306   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:42.631764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:42.748932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:42.749371   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:43.032478   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:43.131853   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:43.249088   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:43.249728   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:43.532884   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:43.632642   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:43.748599   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:43.749065   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:44.033602   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:44.132171   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:44.249344   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:44.249835   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:44.433599   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:44.532662   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:44.632181   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:44.748443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:44.748785   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:45.033368   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:45.131859   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:45.249263   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:45.249709   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:45.533096   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:45.631376   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:45.748955   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:45.749258   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.033511   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:46.132347   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:46.248739   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:46.249160   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.532647   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:46.632424   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:46.748779   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:46.749373   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.931183   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:47.033472   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:47.131786   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:47.249291   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:47.249573   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:47.533062   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:47.631443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:47.749019   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:47.749416   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:48.032697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:48.132659   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:48.249020   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:48.249401   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:48.532863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:48.632443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:48.748984   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:48.749413   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:49.032778   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:49.132449   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:49.248740   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:49.249158   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:49.430379   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:49.532894   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:49.632308   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:49.748689   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:49.749158   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:50.033151   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:50.131571   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:50.249014   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:50.249328   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:50.532829   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:50.632333   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:50.748757   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:50.749169   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:51.033369   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:51.131932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:51.249267   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:51.249658   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:51.430918   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:51.533471   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:51.632010   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:51.749072   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:51.749695   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:52.033468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:52.131895   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:52.249214   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:52.249830   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:52.533324   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:52.631661   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:52.749011   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:52.749470   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:53.033460   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:53.131849   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:53.249377   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:53.249709   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:53.431009   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:53.533596   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:53.632155   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:53.748462   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:53.748914   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:54.033214   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:54.131618   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:54.249008   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:54.249448   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:54.533042   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:54.632633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:54.748999   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:54.749588   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:55.033799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:55.132232   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:55.248600   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:55.248972   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:55.431132   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:55.533498   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:55.632249   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:55.748409   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:55.748799   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:56.033232   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:56.131633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:56.249087   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:56.249443   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:56.532853   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:56.632090   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:56.748878   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:56.748892   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:57.032670   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:57.132402   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:57.248887   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:57.249314   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:57.431495   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:57.532764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:57.632398   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:57.748750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:57.749249   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:58.032988   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:58.132605   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:58.248826   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:58.249443   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:58.533246   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:58.632466   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:58.748323   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:58.748971   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.033150   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:59.131282   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:59.248607   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:59.249030   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.533380   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:59.631811   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:59.749264   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:59.749909   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.930808   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:31:00.033110   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:00.131575   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:00.248601   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:00.248948   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:00.533625   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:00.632215   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:00.748540   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:00.749110   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:01.033691   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:01.132060   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:01.249399   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:01.249913   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:01.533411   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:01.631698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:01.749129   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:01.749394   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.032821   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:02.132265   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:02.248609   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:02.249248   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.431210   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:31:02.533582   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:02.632031   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:02.749318   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:02.749753   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.938686   13892 node_ready.go:49] node "addons-022322" has status "Ready":"True"
	I0915 06:31:02.938772   13892 node_ready.go:38] duration metric: took 41.010898206s for node "addons-022322" to be "Ready" ...
	I0915 06:31:02.938800   13892 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:31:02.947092   13892 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.037453   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:03.134905   13892 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:31:03.134932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:03.249093   13892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:31:03.249112   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:03.249662   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:03.534546   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:03.636557   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:03.751133   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:03.751759   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:03.952699   13892 pod_ready.go:93] pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.952725   13892 pod_ready.go:82] duration metric: took 1.005603448s for pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.952743   13892 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.956791   13892 pod_ready.go:93] pod "etcd-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.956833   13892 pod_ready.go:82] duration metric: took 4.073042ms for pod "etcd-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.956850   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.960877   13892 pod_ready.go:93] pod "kube-apiserver-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.960900   13892 pod_ready.go:82] duration metric: took 4.034597ms for pod "kube-apiserver-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.960911   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.965260   13892 pod_ready.go:93] pod "kube-controller-manager-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.965283   13892 pod_ready.go:82] duration metric: took 4.363575ms for pod "kube-controller-manager-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.965299   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gw7ff" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.033697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:04.132473   13892 pod_ready.go:93] pod "kube-proxy-gw7ff" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:04.132554   13892 pod_ready.go:82] duration metric: took 167.246699ms for pod "kube-proxy-gw7ff" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.132578   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.136244   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:04.251490   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:04.252243   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:04.533023   13892 pod_ready.go:93] pod "kube-scheduler-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:04.533103   13892 pod_ready.go:82] duration metric: took 400.506171ms for pod "kube-scheduler-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.533131   13892 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.533863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:04.634658   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:04.749985   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:04.750620   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:05.033858   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:05.133473   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:05.249607   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:05.250016   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:05.533512   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:05.633522   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:05.749567   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:05.750619   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:06.033337   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:06.132883   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:06.251011   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:06.251171   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:06.533695   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:06.537858   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:06.633310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:06.749666   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:06.750659   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:07.033710   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:07.133859   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:07.250107   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:07.250514   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:07.533553   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:07.633929   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:07.749698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:07.750015   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:08.033127   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:08.132358   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:08.249375   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:08.250351   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:08.533052   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:08.538331   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:08.632846   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:08.750600   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:08.751091   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:09.033893   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:09.133772   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:09.249846   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.250485   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:09.533541   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:09.634329   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:09.749468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.749927   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:10.032951   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:10.133703   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:10.249374   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:10.250142   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:10.533264   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:10.634824   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:10.749724   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:10.749950   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:11.033288   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:11.038713   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:11.133046   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:11.249103   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:11.249357   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:11.533301   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:11.632698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:11.749784   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:11.750069   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:12.033157   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:12.132818   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:12.249697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:12.250174   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:12.533250   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:12.633141   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:12.749453   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:12.749779   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:13.033165   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:13.132738   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:13.249754   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:13.250133   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:13.533097   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:13.537943   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:13.635262   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:13.749235   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:13.749608   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:14.033344   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:14.134224   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:14.250178   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:14.250386   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:14.532745   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:14.632274   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:14.749463   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:14.749574   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:15.032578   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:15.132543   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:15.249733   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:15.250131   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:15.533283   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:15.635694   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:15.749500   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:15.749903   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:16.033326   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:16.037154   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:16.132492   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:16.249928   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:16.250220   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:16.533621   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:16.633765   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:16.749606   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:16.750083   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:17.033424   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:17.133632   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:17.249099   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:17.249293   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:17.533944   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:17.635728   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:17.749747   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:17.749845   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:18.033242   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:18.133749   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:18.248979   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:18.249435   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:18.533485   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:18.537953   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:18.634427   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:18.749507   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:18.750729   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:19.033132   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:19.133614   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:19.250070   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:19.250669   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:19.533209   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:19.634429   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:19.749576   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:19.750000   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:20.033510   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:20.133879   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:20.250067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:20.250469   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:20.533633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:20.633286   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:20.749441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:20.749850   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:21.032951   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:21.037580   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:21.133010   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:21.249096   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:21.249327   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:21.533841   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:21.636703   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:21.750045   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:21.750258   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:22.033777   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:22.133441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:22.250313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:22.250819   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:22.533952   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:22.632273   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:22.749762   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:22.750018   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:23.033083   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:23.037994   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:23.133419   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:23.249942   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:23.250259   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:23.533730   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:23.633468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:23.749343   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:23.749675   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:24.034567   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:24.133677   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:24.249854   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:24.250284   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:24.533692   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:24.635572   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:24.749613   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:24.749916   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:25.033066   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:25.038206   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:25.132536   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:25.249706   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:25.250366   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:25.533750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:25.633778   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:25.750162   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:25.750492   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.032739   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:26.133178   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:26.249808   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.250389   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.533398   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:26.632980   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:26.749044   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.749242   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.033678   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:27.132456   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:27.249550   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.249778   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.532989   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:27.537774   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:27.632926   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:27.749383   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.749640   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.033168   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:28.132791   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.249100   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.249491   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.533927   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:28.633791   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.750246   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.750586   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.034176   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:29.134799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.326913   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.328515   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.533911   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:29.538178   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:29.634297   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.750998   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.751378   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.033198   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.133588   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.249814   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.250074   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.533173   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.634738   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.749679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.750305   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.033423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.133414   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.250044   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.251160   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.533304   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.633864   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.750141   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.750451   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.033133   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.037779   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:32.136313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.249954   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.250075   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.533300   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.633419   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.749736   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.749765   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.034007   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.133723   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.251986   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.252651   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.533521   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.632441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.749489   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.750028   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.033420   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.133332   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.249806   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.250249   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.534059   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.537695   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:34.633237   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.749972   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.750523   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.033433   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.134668   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.249067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.249280   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.533868   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.633700   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.751799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.752239   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.033863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.135788   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.261209   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.261484   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.534169   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.538356   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:36.635005   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.749444   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.749741   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.033143   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.134759   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.249201   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.249293   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.533999   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.633966   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.749679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.750282   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.034292   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.135654   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.248750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.249021   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.533563   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.538901   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:38.634050   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.750025   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.750354   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.033208   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.134881   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.250167   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.250578   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.533950   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.633617   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.749971   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.750223   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.033298   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.134948   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.249689   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.249968   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.533359   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.633818   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.749314   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.750010   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.033236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.037513   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:41.132679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.249029   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.249263   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.533936   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.633190   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.749449   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.749911   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.033106   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.133817   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.249836   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.250431   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.535637   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.633862   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.749067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.749419   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.033542   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.038254   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:43.132986   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.249533   13892 kapi.go:107] duration metric: took 1m17.003470316s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:31:43.249679   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.533132   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.635084   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.824289   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.034118   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.135800   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.250034   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.533788   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.634382   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.825384   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.035081   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.041001   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:45.134128   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.324267   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.532799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.634388   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.750074   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.033800   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.133411   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.249977   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.533385   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.633892   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.749200   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.033644   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.133340   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.254798   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.534822   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.538268   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:47.633121   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.750145   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.034050   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.133341   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.249584   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.534071   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.633605   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.749704   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.033188   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.134519   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.250183   13892 kapi.go:107] duration metric: took 1m23.00438592s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:31:49.533890   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.538762   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:49.635540   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.033558   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.134427   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.533564   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.633920   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.033803   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.133735   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.533829   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.632841   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.033313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.038094   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:52.133649   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.533764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.633086   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.033466   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.134242   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.533335   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.632408   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.033715   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.133140   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.533484   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.538357   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:54.633319   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.033334   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.135308   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.534278   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.632743   13892 kapi.go:107] duration metric: took 1m28.503900328s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:31:56.033022   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.533339   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.033408   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.037428   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:57.533745   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.033869   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.561194   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.033310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.037527   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:59.533635   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.033679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.533809   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.033525   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.532938   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.538141   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:02.033393   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.533588   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.033570   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.534054   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.538193   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:04.033637   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.533236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.033082   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.533172   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.033825   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.037689   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:06.533490   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.033488   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.533224   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.033746   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.038349   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:08.532934   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.035261   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.533246   13892 kapi.go:107] duration metric: took 1m39.003196071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:32:09.535024   13892 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-022322 cluster.
	I0915 06:32:09.536557   13892 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:32:09.537938   13892 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:32:09.539455   13892 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, helm-tiller, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0915 06:32:09.540834   13892 addons.go:510] duration metric: took 1m49.238162954s for enable addons: enabled=[default-storageclass storage-provisioner ingress-dns nvidia-device-plugin helm-tiller cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0915 06:32:10.055748   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:12.538990   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:15.038859   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:17.539022   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:20.038101   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:22.038820   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:23.537933   13892 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"True"
	I0915 06:32:23.537954   13892 pod_ready.go:82] duration metric: took 1m19.004805064s for pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.537962   13892 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.541840   13892 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace has status "Ready":"True"
	I0915 06:32:23.541860   13892 pod_ready.go:82] duration metric: took 3.891408ms for pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.541876   13892 pod_ready.go:39] duration metric: took 1m20.602996157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:32:23.541894   13892 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:32:23.541935   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:23.541985   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:23.576334   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:23.576356   13892 cri.go:89] found id: ""
	I0915 06:32:23.576365   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:23.576422   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.579515   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:23.579565   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:23.612826   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:23.612848   13892 cri.go:89] found id: ""
	I0915 06:32:23.612859   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:23.612912   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.615937   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:23.616004   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:23.648343   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:23.648362   13892 cri.go:89] found id: ""
	I0915 06:32:23.648370   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:23.648421   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.651502   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:23.651550   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:23.683263   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:23.683283   13892 cri.go:89] found id: ""
	I0915 06:32:23.683291   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:23.683342   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.686441   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:23.686492   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:23.718280   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:23.718303   13892 cri.go:89] found id: ""
	I0915 06:32:23.718311   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:23.718362   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.721633   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:23.721680   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:23.752697   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:23.752714   13892 cri.go:89] found id: ""
	I0915 06:32:23.752721   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:23.752768   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.755879   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:23.755942   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:23.787801   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:23.787820   13892 cri.go:89] found id: ""
	I0915 06:32:23.787826   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:23.787876   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.791129   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:23.791151   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:23.867026   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:23.867061   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:23.901983   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:23.902011   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:23.935110   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:23.935141   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:23.988900   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:23.988938   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:24.031371   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:24.031405   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:24.081347   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:24.081384   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:24.122044   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:24.122095   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:24.155921   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:24.155948   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:24.196166   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:24.196216   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:24.263412   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:24.263447   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:24.275361   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:24.275390   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:32:26.871834   13892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:32:26.884976   13892 api_server.go:72] duration metric: took 2m6.582339744s to wait for apiserver process to appear ...
	I0915 06:32:26.885002   13892 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:32:26.885037   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:26.885094   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:26.916059   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:26.916084   13892 cri.go:89] found id: ""
	I0915 06:32:26.916094   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:26.916150   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.919091   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:26.919141   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:26.950001   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:26.950025   13892 cri.go:89] found id: ""
	I0915 06:32:26.950041   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:26.950092   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.953219   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:26.953681   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:26.986623   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:26.986647   13892 cri.go:89] found id: ""
	I0915 06:32:26.986653   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:26.986697   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.989805   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:26.989862   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:27.020895   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:27.020916   13892 cri.go:89] found id: ""
	I0915 06:32:27.020923   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:27.020964   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.023987   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:27.024043   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:27.055667   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:27.055687   13892 cri.go:89] found id: ""
	I0915 06:32:27.055695   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:27.055736   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.058824   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:27.058872   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:27.090021   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:27.090042   13892 cri.go:89] found id: ""
	I0915 06:32:27.090049   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:27.090092   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.093202   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:27.093251   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:27.125406   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:27.125425   13892 cri.go:89] found id: ""
	I0915 06:32:27.125431   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:27.125470   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.128687   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:27.128708   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:32:27.221426   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:27.221463   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:27.264237   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:27.264271   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:27.310366   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:27.310397   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:27.343769   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:27.343796   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:27.374824   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:27.374856   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:27.430978   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:27.431014   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:27.466156   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:27.466183   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:27.534355   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:27.534389   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:27.572880   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:27.572907   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:27.650217   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:27.650248   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:27.689764   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:27.689790   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:30.201718   13892 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:32:30.205361   13892 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:32:30.206248   13892 api_server.go:141] control plane version: v1.31.1
	I0915 06:32:30.206274   13892 api_server.go:131] duration metric: took 3.321265546s to wait for apiserver health ...
	I0915 06:32:30.206281   13892 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:32:30.206300   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:30.206346   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:30.247576   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:30.247601   13892 cri.go:89] found id: ""
	I0915 06:32:30.247616   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:30.247665   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.251237   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:30.251299   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:30.337514   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:30.337535   13892 cri.go:89] found id: ""
	I0915 06:32:30.337542   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:30.337580   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.340694   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:30.340761   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:30.374248   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:30.374270   13892 cri.go:89] found id: ""
	I0915 06:32:30.374277   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:30.374315   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.377794   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:30.377865   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:30.447654   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:30.447678   13892 cri.go:89] found id: ""
	I0915 06:32:30.447687   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:30.447735   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.450965   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:30.451014   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:30.528575   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:30.528594   13892 cri.go:89] found id: ""
	I0915 06:32:30.528601   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:30.528652   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.532059   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:30.532122   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:30.566547   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:30.566565   13892 cri.go:89] found id: ""
	I0915 06:32:30.566572   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:30.566612   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.569834   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:30.569904   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:30.603072   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:30.603098   13892 cri.go:89] found id: ""
	I0915 06:32:30.603109   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:30.603155   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.606231   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:30.606251   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:30.617438   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:30.617461   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:32:30.726726   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:30.726754   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:30.759609   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:30.759631   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:30.814163   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:30.814196   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:30.848586   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:30.848611   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:30.889221   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:30.889248   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:30.955679   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:30.955711   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:31.010974   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:31.011012   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:31.062696   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:31.062727   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:31.097720   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:31.097751   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:31.139225   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:31.139253   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:33.738096   13892 system_pods.go:59] 19 kube-system pods found
	I0915 06:32:33.738132   13892 system_pods.go:61] "coredns-7c65d6cfc9-xrtf5" [3d071306-6186-47d8-a38c-c09d0565172e] Running
	I0915 06:32:33.738138   13892 system_pods.go:61] "csi-hostpath-attacher-0" [b9779b21-66d4-497b-95ca-d4e3bb1f440d] Running
	I0915 06:32:33.738143   13892 system_pods.go:61] "csi-hostpath-resizer-0" [d1de7650-462e-48b8-a7c4-d41806ea999d] Running
	I0915 06:32:33.738146   13892 system_pods.go:61] "csi-hostpathplugin-r87k9" [55f95c6b-c8ef-44a8-8502-9101b3c1a6bc] Running
	I0915 06:32:33.738149   13892 system_pods.go:61] "etcd-addons-022322" [47de0033-c753-46fa-8a91-f22a259be595] Running
	I0915 06:32:33.738153   13892 system_pods.go:61] "kindnet-wj66m" [54288115-3d96-4604-8d43-05eb4463ffa4] Running
	I0915 06:32:33.738156   13892 system_pods.go:61] "kube-apiserver-addons-022322" [6deaca10-4203-4248-8a4f-6d69cd208f8d] Running
	I0915 06:32:33.738159   13892 system_pods.go:61] "kube-controller-manager-addons-022322" [91941bbe-e2ca-4927-8822-171a063ffbe7] Running
	I0915 06:32:33.738162   13892 system_pods.go:61] "kube-ingress-dns-minikube" [5079ffa6-3a78-4f89-b9b1-96c20fca6fb6] Running
	I0915 06:32:33.738166   13892 system_pods.go:61] "kube-proxy-gw7ff" [e4cb2a76-ff95-4461-9c14-70ee381b42b0] Running
	I0915 06:32:33.738169   13892 system_pods.go:61] "kube-scheduler-addons-022322" [6afa8b86-1784-40cf-a887-1e69ffa32f03] Running
	I0915 06:32:33.738172   13892 system_pods.go:61] "metrics-server-84c5f94fbc-gv786" [f7898557-9596-4239-9fab-1fce4db35921] Running
	I0915 06:32:33.738175   13892 system_pods.go:61] "nvidia-device-plugin-daemonset-7x4t6" [549d014b-a13d-466e-8959-d22764717045] Running
	I0915 06:32:33.738179   13892 system_pods.go:61] "registry-66c9cd494c-q5ztn" [d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b] Running
	I0915 06:32:33.738182   13892 system_pods.go:61] "registry-proxy-v7tht" [97f7a0a8-94e9-42f2-8e49-9731910d0d64] Running
	I0915 06:32:33.738185   13892 system_pods.go:61] "snapshot-controller-56fcc65765-h6nwh" [4b24f9d0-a988-4767-96ad-bf7e26d377ef] Running
	I0915 06:32:33.738188   13892 system_pods.go:61] "snapshot-controller-56fcc65765-kndfm" [402c59b1-bcf6-4b08-9646-8a21aed37020] Running
	I0915 06:32:33.738191   13892 system_pods.go:61] "storage-provisioner" [10257ad9-5003-4e70-ab68-778fc1738cc4] Running
	I0915 06:32:33.738193   13892 system_pods.go:61] "tiller-deploy-b48cc5f79-tpczq" [e9d5480f-8c59-4ab5-b5fc-a6fcd1801c51] Running
	I0915 06:32:33.738198   13892 system_pods.go:74] duration metric: took 3.531911981s to wait for pod list to return data ...
	I0915 06:32:33.738204   13892 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:32:33.740398   13892 default_sa.go:45] found service account: "default"
	I0915 06:32:33.740416   13892 default_sa.go:55] duration metric: took 2.207623ms for default service account to be created ...
	I0915 06:32:33.740424   13892 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:32:33.748862   13892 system_pods.go:86] 19 kube-system pods found
	I0915 06:32:33.748886   13892 system_pods.go:89] "coredns-7c65d6cfc9-xrtf5" [3d071306-6186-47d8-a38c-c09d0565172e] Running
	I0915 06:32:33.748892   13892 system_pods.go:89] "csi-hostpath-attacher-0" [b9779b21-66d4-497b-95ca-d4e3bb1f440d] Running
	I0915 06:32:33.748896   13892 system_pods.go:89] "csi-hostpath-resizer-0" [d1de7650-462e-48b8-a7c4-d41806ea999d] Running
	I0915 06:32:33.748900   13892 system_pods.go:89] "csi-hostpathplugin-r87k9" [55f95c6b-c8ef-44a8-8502-9101b3c1a6bc] Running
	I0915 06:32:33.748903   13892 system_pods.go:89] "etcd-addons-022322" [47de0033-c753-46fa-8a91-f22a259be595] Running
	I0915 06:32:33.748907   13892 system_pods.go:89] "kindnet-wj66m" [54288115-3d96-4604-8d43-05eb4463ffa4] Running
	I0915 06:32:33.748912   13892 system_pods.go:89] "kube-apiserver-addons-022322" [6deaca10-4203-4248-8a4f-6d69cd208f8d] Running
	I0915 06:32:33.748915   13892 system_pods.go:89] "kube-controller-manager-addons-022322" [91941bbe-e2ca-4927-8822-171a063ffbe7] Running
	I0915 06:32:33.748919   13892 system_pods.go:89] "kube-ingress-dns-minikube" [5079ffa6-3a78-4f89-b9b1-96c20fca6fb6] Running
	I0915 06:32:33.748922   13892 system_pods.go:89] "kube-proxy-gw7ff" [e4cb2a76-ff95-4461-9c14-70ee381b42b0] Running
	I0915 06:32:33.748927   13892 system_pods.go:89] "kube-scheduler-addons-022322" [6afa8b86-1784-40cf-a887-1e69ffa32f03] Running
	I0915 06:32:33.748935   13892 system_pods.go:89] "metrics-server-84c5f94fbc-gv786" [f7898557-9596-4239-9fab-1fce4db35921] Running
	I0915 06:32:33.748939   13892 system_pods.go:89] "nvidia-device-plugin-daemonset-7x4t6" [549d014b-a13d-466e-8959-d22764717045] Running
	I0915 06:32:33.748946   13892 system_pods.go:89] "registry-66c9cd494c-q5ztn" [d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b] Running
	I0915 06:32:33.748949   13892 system_pods.go:89] "registry-proxy-v7tht" [97f7a0a8-94e9-42f2-8e49-9731910d0d64] Running
	I0915 06:32:33.748960   13892 system_pods.go:89] "snapshot-controller-56fcc65765-h6nwh" [4b24f9d0-a988-4767-96ad-bf7e26d377ef] Running
	I0915 06:32:33.748965   13892 system_pods.go:89] "snapshot-controller-56fcc65765-kndfm" [402c59b1-bcf6-4b08-9646-8a21aed37020] Running
	I0915 06:32:33.748970   13892 system_pods.go:89] "storage-provisioner" [10257ad9-5003-4e70-ab68-778fc1738cc4] Running
	I0915 06:32:33.748974   13892 system_pods.go:89] "tiller-deploy-b48cc5f79-tpczq" [e9d5480f-8c59-4ab5-b5fc-a6fcd1801c51] Running
	I0915 06:32:33.748983   13892 system_pods.go:126] duration metric: took 8.554163ms to wait for k8s-apps to be running ...
	I0915 06:32:33.748991   13892 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:32:33.749033   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:32:33.759914   13892 system_svc.go:56] duration metric: took 10.915717ms WaitForService to wait for kubelet
	I0915 06:32:33.759944   13892 kubeadm.go:582] duration metric: took 2m13.45731059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:32:33.759970   13892 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:32:33.762677   13892 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0915 06:32:33.762700   13892 node_conditions.go:123] node cpu capacity is 8
	I0915 06:32:33.762712   13892 node_conditions.go:105] duration metric: took 2.737031ms to run NodePressure ...
	I0915 06:32:33.762722   13892 start.go:241] waiting for startup goroutines ...
	I0915 06:32:33.762728   13892 start.go:246] waiting for cluster config update ...
	I0915 06:32:33.762743   13892 start.go:255] writing updated cluster config ...
	I0915 06:32:33.762994   13892 ssh_runner.go:195] Run: rm -f paused
	I0915 06:32:33.810544   13892 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:32:33.812783   13892 out.go:177] * Done! kubectl is now configured to use "addons-022322" cluster and "default" namespace by default
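Note: the repeated "waiting for pod ..., current state: Pending" lines above are minikube's addon wait loop: it lists pods matching a label selector roughly every 500ms until all of them report Ready, then emits the "duration metric: took ..." line. The sketch below shows that polling pattern with client-go. It is a minimal illustration, not minikube's actual kapi.go/pod_ready.go code; the kubeconfig location, namespace, label selector, poll interval, and timeout are assumptions taken from the log above or invented for the example.

    // poll_ready.go: wait for all pods matching a label selector to be Ready.
    // Illustrative sketch only -- not minikube's implementation.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: a kubeconfig at the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        selector := "kubernetes.io/minikube-addons=gcp-auth" // label seen in the log above
        start := time.Now()
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // keep polling through transient errors and empty lists
                }
                for i := range pods.Items {
                    if !podReady(&pods.Items[i]) {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
    }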
	
	
	==> CRI-O <==
	Sep 15 06:43:22 addons-022322 conmon[3683]: conmon b9b5e44789caa10617a3 <ninfo>: container 3695 exited with status 1
	Sep 15 06:43:22 addons-022322 crio[1033]: time="2024-09-15 06:43:22.258965541Z" level=info msg="Stopped container b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=0b3554ce-1c49-44dd-9234-d4ae18e16882 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:43:22 addons-022322 crio[1033]: time="2024-09-15 06:43:22.259489050Z" level=info msg="Stopping pod sandbox: 9297294de83bfb535bce6f4cbe424d5436a6cb42383db64a01fcdacd08437723" id=224a7aa3-c175-45f3-bbd0-c68a44ad17ea name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:43:22 addons-022322 crio[1033]: time="2024-09-15 06:43:22.260075945Z" level=info msg="Stopped pod sandbox: 9297294de83bfb535bce6f4cbe424d5436a6cb42383db64a01fcdacd08437723" id=224a7aa3-c175-45f3-bbd0-c68a44ad17ea name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:43:22 addons-022322 crio[1033]: time="2024-09-15 06:43:22.694786155Z" level=info msg="Removing container: b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6" id=808fdbe3-2673-4b14-a3d3-b4a5b6792d25 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:43:22 addons-022322 crio[1033]: time="2024-09-15 06:43:22.708013736Z" level=info msg="Removed container b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=808fdbe3-2673-4b14-a3d3-b4a5b6792d25 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:43:24 addons-022322 crio[1033]: time="2024-09-15 06:43:24.235569759Z" level=info msg="Stopping container: 7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91 (timeout: 2s)" id=787e8335-2fb9-4aa8-8745-3fa3b98cb618 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.241943865Z" level=warning msg="Stopping container 7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=787e8335-2fb9-4aa8-8745-3fa3b98cb618 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:43:26 addons-022322 conmon[5729]: conmon 7b86d41c025509e7948f <ninfo>: container 5741 exited with status 137
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.345347833Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=33dcaa65-193e-4f35-8c27-08ab08739e8f name=/runtime.v1.ImageService/PullImage
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.346571147Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.374223081Z" level=info msg="Stopped container 7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91: ingress-nginx/ingress-nginx-controller-bc57996ff-rbq4t/controller" id=787e8335-2fb9-4aa8-8745-3fa3b98cb618 name=/runtime.v1.RuntimeService/StopContainer
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.374724383Z" level=info msg="Stopping pod sandbox: ff90f27733177a87e2aeee73adc7964d1034062a7f8adac535bb30d37e7727b4" id=cfc4cedd-d7f5-455e-b32c-ebf795a7c035 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.377585678Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-IC4TX6RUQ2PDL433 - [0:0]\n:KUBE-HP-WZN244WMAC2SFCZJ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-IC4TX6RUQ2PDL433\n-X KUBE-HP-WZN244WMAC2SFCZJ\nCOMMIT\n"
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.378806596Z" level=info msg="Closing host port tcp:80"
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.378845136Z" level=info msg="Closing host port tcp:443"
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.380129899Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.380145885Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.380310132Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-rbq4t Namespace:ingress-nginx ID:ff90f27733177a87e2aeee73adc7964d1034062a7f8adac535bb30d37e7727b4 UID:7fb8df77-b72c-4e81-bfa1-e89a8f2286f9 NetNS:/var/run/netns/23f06e9e-f58e-4b7d-ae63-0788236ff804 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.380429633Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-rbq4t from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.421524041Z" level=info msg="Stopped pod sandbox: ff90f27733177a87e2aeee73adc7964d1034062a7f8adac535bb30d37e7727b4" id=cfc4cedd-d7f5-455e-b32c-ebf795a7c035 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.703530405Z" level=info msg="Removing container: 7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91" id=a2ba22bc-3088-4ead-8eed-7443113236aa name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:43:26 addons-022322 crio[1033]: time="2024-09-15 06:43:26.716941127Z" level=info msg="Removed container 7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91: ingress-nginx/ingress-nginx-controller-bc57996ff-rbq4t/controller" id=a2ba22bc-3088-4ead-8eed-7443113236aa name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 15 06:43:28 addons-022322 crio[1033]: time="2024-09-15 06:43:28.654247596Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b236ffa-1a49-43a6-815e-511f2a39100a name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:43:28 addons-022322 crio[1033]: time="2024-09-15 06:43:28.654687565Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4b236ffa-1a49-43a6-815e-511f2a39100a name=/runtime.v1.ImageService/ImageStatus
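Note: in the CRI-O excerpt above, the ingress-nginx controller was given a 2-second stop timeout, did not exit in time, and was killed; conmon's "exited with status 137" is the conventional 128+9 exit code for a process terminated by SIGKILL. The remaining lines are normal sandbox teardown: the hostport iptables chains are flushed, host ports 80/443 are closed, and the pod is removed from the "kindnet" CNI network.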
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be7dda375439d       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   d00635454c734       nginx
	ebf8a7f6a2815       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   fd3e91b2fb80d       gcp-auth-89d5ffd79-f42ql
	837ba5352bdf5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              patch                     0                   2a98a5989fe40       ingress-nginx-admission-patch-9qczt
	8976c24f1a582       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   77e191d527a22       ingress-nginx-admission-create-kktzj
	a31e0f0167cc9       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   a3eb6e2a55c01       metrics-server-84c5f94fbc-gv786
	e02acb9daf95c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             12 minutes ago      Running             local-path-provisioner    0                   45ad5754c4627       local-path-provisioner-86d989889c-dmzqm
	3e976270afdc6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   749159bde67b6       coredns-7c65d6cfc9-xrtf5
	f16ac41ad768c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   b981d61af6f0a       storage-provisioner
	8a93f6647ecee       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                             13 minutes ago      Running             kindnet-cni               0                   1db5bf8d5ef4a       kindnet-wj66m
	2357c6fca0125       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   ad944dd66325b       kube-proxy-gw7ff
	8cd403ba68b5e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   3704996f909cf       etcd-addons-022322
	cd45634612a50       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   1b2ea9f7b9f0a       kube-apiserver-addons-022322
	793a3d9d3aa84       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   0d8125e8ef959       kube-scheduler-addons-022322
	b6d57c6bce9ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   f6b2699e528bd       kube-controller-manager-addons-022322
	
	
	==> coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] <==
	[INFO] 10.244.0.18:53657 - 14329 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107885s
	[INFO] 10.244.0.18:57900 - 62309 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073666s
	[INFO] 10.244.0.18:57900 - 27259 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112682s
	[INFO] 10.244.0.18:51135 - 25280 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004514344s
	[INFO] 10.244.0.18:51135 - 65484 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005682544s
	[INFO] 10.244.0.18:37446 - 3615 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007024634s
	[INFO] 10.244.0.18:37446 - 35842 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008710763s
	[INFO] 10.244.0.18:58524 - 29672 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004764629s
	[INFO] 10.244.0.18:58524 - 27116 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007955396s
	[INFO] 10.244.0.18:36601 - 30204 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000108259s
	[INFO] 10.244.0.18:36601 - 46072 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175121s
	[INFO] 10.244.0.21:59154 - 7876 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000214034s
	[INFO] 10.244.0.21:52693 - 54985 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000104888s
	[INFO] 10.244.0.21:53529 - 47590 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129252s
	[INFO] 10.244.0.21:51668 - 52873 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189752s
	[INFO] 10.244.0.21:47297 - 8172 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109168s
	[INFO] 10.244.0.21:45975 - 40007 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014485s
	[INFO] 10.244.0.21:52233 - 54039 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007424492s
	[INFO] 10.244.0.21:38833 - 7325 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.010412323s
	[INFO] 10.244.0.21:52331 - 57813 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00775984s
	[INFO] 10.244.0.21:56895 - 26084 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.015034445s
	[INFO] 10.244.0.21:50418 - 4446 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006952543s
	[INFO] 10.244.0.21:60979 - 46705 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008386519s
	[INFO] 10.244.0.21:44818 - 40867 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000749057s
	[INFO] 10.244.0.21:53307 - 22244 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000849441s
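Note: the NXDOMAIN bursts above are expected resolver behavior, not lookup failures. With Kubernetes' default ndots:5 option, a pod's resolver expands a short name through every entry of its search path (the pod's namespace, svc.cluster.local, cluster.local, then the GCP host's internal domains) before the final absolute query succeeds with NOERROR. Assuming default kubelet DNS settings, the querying pod's /etc/resolv.conf would look roughly like the sketch below; the "default" namespace and the nameserver address are assumptions for illustration.

    # hypothetical pod resolv.conf; search domains inferred from the queries above
    search default.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
    nameserver 10.96.0.10
    options ndots:5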
	
	
	==> describe nodes <==
	Name:               addons-022322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-022322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-022322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_30_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-022322
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:30:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-022322
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:43:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:43:19 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:43:19 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:43:19 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:43:19 +0000   Sun, 15 Sep 2024 06:31:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-022322
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f53fbb4eb4047c3b38331dd58a0e17d
	  System UUID:                b20760c2-a565-423c-88fb-0ebf81478f0b
	  Boot ID:                    d7eb9d55-e244-423e-b0bb-fd0ad06c12bb
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-m2kmg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-f42ql                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-xrtf5                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-addons-022322                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-wj66m                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-addons-022322                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-022322                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gw7ff                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-022322                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-gv786                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  local-path-storage          local-path-provisioner-86d989889c-dmzqm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-022322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-022322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-022322 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node addons-022322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node addons-022322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node addons-022322 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node addons-022322 event: Registered Node addons-022322 in Controller
	  Normal   NodeReady                12m                kubelet          Node addons-022322 status is now: NodeReady
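
As a cross-check, the Allocated resources block is simply the column sums of the pod table above (only pods with non-zero requests or limits contribute):

	cpu requests:    100m + 100m + 100m + 250m + 200m + 100m + 100m = 950m  (11% of 8 CPUs)
	cpu limits:      100m (kindnet)                                 = 100m
	memory requests: 70Mi + 100Mi + 50Mi + 200Mi                    = 420Mi
	memory limits:   170Mi (coredns) + 50Mi (kindnet)               = 220Mi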
	
	
	==> dmesg <==
	[  +0.003031] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000695] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000704] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000612] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000625] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000619] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.600975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.568733] kauditd_printk_skb: 46 callbacks suppressed
	[Sep15 06:41] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +1.004271] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +2.015809] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +4.127715] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +8.191377] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[ +16.126848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[Sep15 06:42] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
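
The repeating "martian source 10.244.0.20 from 127.0.0.1" entries show the kernel's rate-limited logging backing off roughly geometrically (1s, 2s, 4s, 8s, 16s). They are plausibly a side effect of kube-proxy setting route_localnet=1 (visible in the kube-proxy log below), which permits loopback-sourced traffic on node ports; that reading is an inference from this output, not a confirmed diagnosis. The full record can be pulled from inside the node with something like:

	minikube -p addons-022322 ssh -- sudo dmesg | grep martian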
	
	
	==> etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] <==
	{"level":"info","ts":"2024-09-15T06:30:23.937675Z","caller":"traceutil/trace.go:171","msg":"trace[1964839114] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:452; }","duration":"112.53835ms","start":"2024-09-15T06:30:23.825128Z","end":"2024-09-15T06:30:23.937666Z","steps":["trace[1964839114] 'agreement among raft nodes before linearized reading'  (duration: 107.412384ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:23.937678Z","caller":"traceutil/trace.go:171","msg":"trace[1725845484] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:452; }","duration":"112.729433ms","start":"2024-09-15T06:30:23.824939Z","end":"2024-09-15T06:30:23.937668Z","steps":["trace[1725845484] 'agreement among raft nodes before linearized reading'  (duration: 107.731581ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.041975Z","caller":"traceutil/trace.go:171","msg":"trace[224570116] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"105.124293ms","start":"2024-09-15T06:30:23.936813Z","end":"2024-09-15T06:30:24.041937Z","steps":["trace[224570116] 'process raft request'  (duration: 96.117583ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.042456Z","caller":"traceutil/trace.go:171","msg":"trace[2126800427] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"104.80218ms","start":"2024-09-15T06:30:23.937643Z","end":"2024-09-15T06:30:24.042445Z","steps":["trace[2126800427] 'process raft request'  (duration: 104.169503ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.042842Z","caller":"traceutil/trace.go:171","msg":"trace[1060131522] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"103.35953ms","start":"2024-09-15T06:30:23.939467Z","end":"2024-09-15T06:30:24.042827Z","steps":["trace[1060131522] 'process raft request'  (duration: 103.268287ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.043080Z","caller":"traceutil/trace.go:171","msg":"trace[68935875] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:464; }","duration":"103.319532ms","start":"2024-09-15T06:30:23.939753Z","end":"2024-09-15T06:30:24.043073Z","steps":["trace[68935875] 'read index received'  (duration: 951.239µs)","trace[68935875] 'applied index is now lower than readState.Index'  (duration: 102.367501ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:30:24.043144Z","caller":"traceutil/trace.go:171","msg":"trace[1100533312] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"101.898324ms","start":"2024-09-15T06:30:23.941239Z","end":"2024-09-15T06:30:24.043137Z","steps":["trace[1100533312] 'process raft request'  (duration: 101.57996ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.043319Z","caller":"traceutil/trace.go:171","msg":"trace[1710169861] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"100.573964ms","start":"2024-09-15T06:30:23.942734Z","end":"2024-09-15T06:30:24.043308Z","steps":["trace[1710169861] 'process raft request'  (duration: 100.142047ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.044239Z","caller":"traceutil/trace.go:171","msg":"trace[430677801] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"101.345814ms","start":"2024-09-15T06:30:23.942848Z","end":"2024-09-15T06:30:24.044194Z","steps":["trace[430677801] 'process raft request'  (duration: 100.096168ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.044393Z","caller":"traceutil/trace.go:171","msg":"trace[1553361540] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"101.362761ms","start":"2024-09-15T06:30:23.943022Z","end":"2024-09-15T06:30:24.044385Z","steps":["trace[1553361540] 'process raft request'  (duration: 99.949501ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.043567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.801903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.044478Z","caller":"traceutil/trace.go:171","msg":"trace[303371796] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:460; }","duration":"104.72355ms","start":"2024-09-15T06:30:23.939748Z","end":"2024-09-15T06:30:24.044472Z","steps":["trace[303371796] 'agreement among raft nodes before linearized reading'  (duration: 103.480693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.631766Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.736254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.631932Z","caller":"traceutil/trace.go:171","msg":"trace[331691259] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:509; }","duration":"102.903775ms","start":"2024-09-15T06:30:24.528987Z","end":"2024-09-15T06:30:24.631891Z","steps":["trace[331691259] 'agreement among raft nodes before linearized reading'  (duration: 102.691356ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.724544Z","caller":"traceutil/trace.go:171","msg":"trace[1058426745] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"180.002052ms","start":"2024-09-15T06:30:24.544515Z","end":"2024-09-15T06:30:24.724517Z","steps":["trace[1058426745] 'process raft request'  (duration: 179.769517ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.724704Z","caller":"traceutil/trace.go:171","msg":"trace[911394532] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:522; }","duration":"179.289521ms","start":"2024-09-15T06:30:24.545401Z","end":"2024-09-15T06:30:24.724690Z","steps":["trace[911394532] 'read index received'  (duration: 92.846516ms)","trace[911394532] 'applied index is now lower than readState.Index'  (duration: 86.442357ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:30:24.724875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.201545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.724952Z","caller":"traceutil/trace.go:171","msg":"trace[1209499748] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:517; }","duration":"180.28811ms","start":"2024-09-15T06:30:24.544654Z","end":"2024-09-15T06:30:24.724942Z","steps":["trace[1209499748] 'agreement among raft nodes before linearized reading'  (duration: 180.146303ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.725112Z","caller":"traceutil/trace.go:171","msg":"trace[1655529344] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"180.345425ms","start":"2024-09-15T06:30:24.544758Z","end":"2024-09-15T06:30:24.725104Z","steps":["trace[1655529344] 'process raft request'  (duration: 179.83428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.725298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.355901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:30:24.725370Z","caller":"traceutil/trace.go:171","msg":"trace[994147516] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:517; }","duration":"180.432064ms","start":"2024-09-15T06:30:24.544929Z","end":"2024-09-15T06:30:24.725361Z","steps":["trace[994147516] 'agreement among raft nodes before linearized reading'  (duration: 180.340916ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:31:52.950195Z","caller":"traceutil/trace.go:171","msg":"trace[1140578157] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"103.841586ms","start":"2024-09-15T06:31:52.846338Z","end":"2024-09-15T06:31:52.950180Z","steps":["trace[1140578157] 'process raft request'  (duration: 103.740777ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:40:10.962662Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1592}
	{"level":"info","ts":"2024-09-15T06:40:10.985039Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1592,"took":"21.96857ms","hash":12553061,"current-db-size-bytes":6156288,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3473408,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-15T06:40:10.985077Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":12553061,"revision":1592,"compact-revision":-1}
	
	
	==> gcp-auth [ebf8a7f6a28156c5630a4cc474404dbbe134dc27b13486fc221e2c64f628f1f0] <==
	2024/09/15 06:32:34 Ready to write response ...
	2024/09/15 06:32:34 Ready to marshal response ...
	2024/09/15 06:32:34 Ready to write response ...
	2024/09/15 06:40:47 Ready to marshal response ...
	2024/09/15 06:40:47 Ready to write response ...
	2024/09/15 06:40:50 Ready to marshal response ...
	2024/09/15 06:40:50 Ready to write response ...
	2024/09/15 06:40:54 Ready to marshal response ...
	2024/09/15 06:40:54 Ready to write response ...
	2024/09/15 06:41:00 Ready to marshal response ...
	2024/09/15 06:41:00 Ready to write response ...
	2024/09/15 06:41:15 Ready to marshal response ...
	2024/09/15 06:41:15 Ready to write response ...
	2024/09/15 06:41:43 Ready to marshal response ...
	2024/09/15 06:41:43 Ready to write response ...
	2024/09/15 06:41:43 Ready to marshal response ...
	2024/09/15 06:41:43 Ready to write response ...
	2024/09/15 06:42:02 Ready to marshal response ...
	2024/09/15 06:42:02 Ready to write response ...
	2024/09/15 06:42:02 Ready to marshal response ...
	2024/09/15 06:42:02 Ready to write response ...
	2024/09/15 06:42:02 Ready to marshal response ...
	2024/09/15 06:42:02 Ready to write response ...
	2024/09/15 06:43:21 Ready to marshal response ...
	2024/09/15 06:43:21 Ready to write response ...
	
	
	==> kernel <==
	 06:43:31 up 26 min,  0 users,  load average: 1.33, 0.54, 0.38
	Linux addons-022322 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] <==
	I0915 06:41:22.741893       1 main.go:299] handling current node
	I0915 06:41:32.741392       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:41:32.741448       1 main.go:299] handling current node
	I0915 06:41:42.741427       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:41:42.741479       1 main.go:299] handling current node
	I0915 06:41:52.741302       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:41:52.741343       1 main.go:299] handling current node
	I0915 06:42:02.741787       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:42:02.741852       1 main.go:299] handling current node
	I0915 06:42:12.741308       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:42:12.741340       1 main.go:299] handling current node
	I0915 06:42:22.741302       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:42:22.741341       1 main.go:299] handling current node
	I0915 06:42:32.744421       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:42:32.744471       1 main.go:299] handling current node
	I0915 06:42:42.741735       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:42:42.741773       1 main.go:299] handling current node
	I0915 06:42:52.744330       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:42:52.744364       1 main.go:299] handling current node
	I0915 06:43:02.742184       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:43:02.742238       1 main.go:299] handling current node
	I0915 06:43:12.748266       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:43:12.748297       1 main.go:299] handling current node
	I0915 06:43:22.741717       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:43:22.741754       1 main.go:299] handling current node
	
	
	==> kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0915 06:32:23.177804       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0915 06:40:57.536268       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:55822: read: connection reset by peer
	I0915 06:41:00.195132       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 06:41:00.358287       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.223.215"}
	I0915 06:41:02.972541       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0915 06:41:31.847713       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.847786       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.860136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.860240       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.861510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.861559       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.873408       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.873456       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.927412       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.927451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:41:32.862071       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:41:32.928023       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0915 06:41:33.025299       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0915 06:41:37.637716       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:41:38.658596       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 06:42:02.158625       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.241.155"}
	I0915 06:43:21.460078       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.209.137"}
	
	
	==> kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] <==
	W0915 06:42:47.496555       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:42:47.496604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:42:48.637209       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="92.271µs"
	I0915 06:42:48.652351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.719117ms"
	I0915 06:42:48.652479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="75.187µs"
	W0915 06:42:49.315032       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:42:49.315075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:42:54.628967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="7.54µs"
	W0915 06:43:02.152871       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:02.152912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:43:04.721608       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0915 06:43:17.532387       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:17.532429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:43:17.876944       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:17.876984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:43:19.616115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-022322"
	I0915 06:43:21.266001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.426223ms"
	I0915 06:43:21.271019       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.974371ms"
	I0915 06:43:21.271103       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.58µs"
	I0915 06:43:21.277124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.872µs"
	W0915 06:43:21.570080       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:43:21.570119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:43:23.220701       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0915 06:43:23.220853       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0915 06:43:23.226436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="22.554µs"
	
	
	==> kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] <==
	I0915 06:30:21.834006       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:30:23.436123       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:30:23.436244       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:30:23.828735       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:30:23.920347       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:30:24.020895       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:30:24.021810       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:30:24.021862       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:30:24.023838       1 config.go:199] "Starting service config controller"
	I0915 06:30:24.035431       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:30:24.037976       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:30:24.024321       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:30:24.038178       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:30:24.038213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:30:24.024295       1 config.go:328] "Starting node config controller"
	I0915 06:30:24.038343       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:30:24.138804       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] <==
	E0915 06:30:12.440324       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:30:12.439968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 06:30:12.440368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 06:30:12.440396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:12.440004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:30:12.440436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:12.440042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:12.440462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.324810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:30:13.324857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.358300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:30:13.358343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.387669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:30:13.387710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.459534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:13.459576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.464687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:30:13.464727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.561227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:13.561268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.591583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:30:13.591620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.632014       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:30:13.632056       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:30:16.638356       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:43:22 addons-022322 kubelet[1653]: I0915 06:43:22.454101    1653 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bwg54\" (UniqueName: \"kubernetes.io/projected/5079ffa6-3a78-4f89-b9b1-96c20fca6fb6-kube-api-access-bwg54\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:43:22 addons-022322 kubelet[1653]: I0915 06:43:22.693775    1653 scope.go:117] "RemoveContainer" containerID="b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6"
	Sep 15 06:43:22 addons-022322 kubelet[1653]: I0915 06:43:22.708288    1653 scope.go:117] "RemoveContainer" containerID="b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6"
	Sep 15 06:43:22 addons-022322 kubelet[1653]: E0915 06:43:22.708793    1653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6\": container with ID starting with b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6 not found: ID does not exist" containerID="b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6"
	Sep 15 06:43:22 addons-022322 kubelet[1653]: I0915 06:43:22.708839    1653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6"} err="failed to get container status \"b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6\": rpc error: code = NotFound desc = could not find container \"b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6\": container with ID starting with b9b5e44789caa10617a3424c9a19ad8359197a037e3559bf0b10d3c2aa8ad3b6 not found: ID does not exist"
	Sep 15 06:43:24 addons-022322 kubelet[1653]: I0915 06:43:24.656023    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481ae87b-3189-4145-833a-86297031c70e" path="/var/lib/kubelet/pods/481ae87b-3189-4145-833a-86297031c70e/volumes"
	Sep 15 06:43:24 addons-022322 kubelet[1653]: I0915 06:43:24.656581    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5079ffa6-3a78-4f89-b9b1-96c20fca6fb6" path="/var/lib/kubelet/pods/5079ffa6-3a78-4f89-b9b1-96c20fca6fb6/volumes"
	Sep 15 06:43:24 addons-022322 kubelet[1653]: I0915 06:43:24.656951    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65afeb7-e8a2-4ef4-a30c-68588a990df9" path="/var/lib/kubelet/pods/e65afeb7-e8a2-4ef4-a30c-68588a990df9/volumes"
	Sep 15 06:43:24 addons-022322 kubelet[1653]: E0915 06:43:24.878648    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382604878436734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:553862,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:43:24 addons-022322 kubelet[1653]: E0915 06:43:24.878679    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382604878436734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:553862,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: E0915 06:43:26.344546    1653 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: E0915 06:43:26.344610    1653 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: E0915 06:43:26.344880    1653 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:helper-pod,Image:docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79,Command:[/bin/sh /script/setup],Args:[-p /opt/local-path-provisioner/pvc-a939ce70-1255-4d35-b78f-729a550689f6_default_test-pvc -s 67108864 -m Filesystem],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:VOL_DIR,Value:/opt/local-path-provisioner/pvc-a939ce70-1255-4d35-b78f-729a550689f6_default_test-pvc,ValueFrom:nil,},EnvVar{Name:VOL_MODE,Value:Filesystem,ValueFrom:nil,},EnvVar{Name:VOL_SIZE_BYTES,Value:67108864,ValueFrom:nil,},EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:script,ReadOnly:false,MountPath:/script,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:data,ReadOnly:false,MountPath:/opt/local-path-provisioner/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kl8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6_local-path-storage(2a163c6c-fb90-4f42-8156-5f00dc9a2fa2): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: E0915 06:43:26.346456    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6" podUID="2a163c6c-fb90-4f42-8156-5f00dc9a2fa2"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.480529    1653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fb8df77-b72c-4e81-bfa1-e89a8f2286f9-webhook-cert\") pod \"7fb8df77-b72c-4e81-bfa1-e89a8f2286f9\" (UID: \"7fb8df77-b72c-4e81-bfa1-e89a8f2286f9\") "
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.480572    1653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55wcr\" (UniqueName: \"kubernetes.io/projected/7fb8df77-b72c-4e81-bfa1-e89a8f2286f9-kube-api-access-55wcr\") pod \"7fb8df77-b72c-4e81-bfa1-e89a8f2286f9\" (UID: \"7fb8df77-b72c-4e81-bfa1-e89a8f2286f9\") "
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.482369    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fb8df77-b72c-4e81-bfa1-e89a8f2286f9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7fb8df77-b72c-4e81-bfa1-e89a8f2286f9" (UID: "7fb8df77-b72c-4e81-bfa1-e89a8f2286f9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.482376    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb8df77-b72c-4e81-bfa1-e89a8f2286f9-kube-api-access-55wcr" (OuterVolumeSpecName: "kube-api-access-55wcr") pod "7fb8df77-b72c-4e81-bfa1-e89a8f2286f9" (UID: "7fb8df77-b72c-4e81-bfa1-e89a8f2286f9"). InnerVolumeSpecName "kube-api-access-55wcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.581625    1653 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7fb8df77-b72c-4e81-bfa1-e89a8f2286f9-webhook-cert\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.581657    1653 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-55wcr\" (UniqueName: \"kubernetes.io/projected/7fb8df77-b72c-4e81-bfa1-e89a8f2286f9-kube-api-access-55wcr\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.655731    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb8df77-b72c-4e81-bfa1-e89a8f2286f9" path="/var/lib/kubelet/pods/7fb8df77-b72c-4e81-bfa1-e89a8f2286f9/volumes"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.702505    1653 scope.go:117] "RemoveContainer" containerID="7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.717168    1653 scope.go:117] "RemoveContainer" containerID="7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: E0915 06:43:26.717518    1653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91\": container with ID starting with 7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91 not found: ID does not exist" containerID="7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91"
	Sep 15 06:43:26 addons-022322 kubelet[1653]: I0915 06:43:26.717566    1653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91"} err="failed to get container status \"7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91\": rpc error: code = NotFound desc = could not find container \"7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91\": container with ID starting with 7b86d41c025509e7948f5adcdd2a9d5b13119e8a35f8c0b1cbf2f224dc463a91 not found: ID does not exist"
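
The kubelet entries above contain the proximate cause of the local-path helper-pod failure: pulling docker.io/library/busybox hit Docker Hub's anonymous pull rate limit (toomanyrequests). One possible mitigation for reruns, assuming the image is already available locally, is to side-load it so the node never pulls from Docker Hub:

	minikube -p addons-022322 image load docker.io/busybox:stable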
	
	
	==> storage-provisioner [f16ac41ad768c5af72a289634ca7ed99edb67900cef177b81dd428a113bf6c28] <==
	I0915 06:31:03.471182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:31:03.479024       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:31:03.479069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:31:03.486210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:31:03.486362       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702!
	I0915 06:31:03.486750       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bf9f02c-8c94-46e0-beae-8c5e4ea3cb36", APIVersion:"v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702 became leader
	I0915 06:31:03.587216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-022322 -n addons-022322
helpers_test.go:261: (dbg) Run:  kubectl --context addons-022322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox hello-world-app-55bf9c44b4-m2kmg test-local-path helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-022322 describe pod busybox hello-world-app-55bf9c44b4-m2kmg test-local-path helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-022322 describe pod busybox hello-world-app-55bf9c44b4-m2kmg test-local-path helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6: exit status 1 (75.050548ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-022322/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:32:34 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vj9bj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vj9bj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/busybox to addons-022322
	  Normal   Pulling    9m27s (x4 over 10m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m27s (x4 over 10m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m27s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m16s (x6 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    56s (x43 over 10m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
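
Note that the busybox pod fails differently from the Docker Hub rate limit seen earlier: the pull of gcr.io/k8s-minikube/busybox is rejected with "invalid username/password: unauthorized", consistent with the fake gcp-auth credentials visible in the pod environment being presented to the registry (an inference from this output, not verified). The pod's event stream can be isolated with:

	kubectl --context addons-022322 get events -n default --field-selector involvedObject.name=busybox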
	
	
	Name:             hello-world-app-55bf9c44b4-m2kmg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-022322/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:43:21 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zpqdq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zpqdq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-m2kmg to addons-022322
	  Normal  Pulling    11s   kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxctw (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xxctw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-022322 describe pod busybox hello-world-app-55bf9c44b4-m2kmg test-local-path helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6: exit status 1
--- FAIL: TestAddons/parallel/Ingress (152.37s)
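The describe output above pins this failure on image pulls: the busybox pod never leaves ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc is rejected at the auth-token step ("invalid username/password: unauthorized"). A minimal sketch for confirming that outside the harness, using standard kubectl/minikube commands against this run's context (the commands are not part of the test output):

	# list only the events for the stuck pod, newest last
	kubectl --context addons-022322 get events -n default \
	  --field-selector involvedObject.name=busybox --sort-by=.lastTimestamp
	# retry the pull from inside the node to see the raw registry error (crio runtime, so crictl)
	minikube -p addons-022322 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc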

                                                
                                    
TestAddons/parallel/MetricsServer (349.17s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.74251ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gv786" [f7898557-9596-4239-9fab-1fce4db35921] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002454108s
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (72.25913ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 10m22.698970358s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (66.060309ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 10m25.58900404s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (64.732847ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 10m31.767658466s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (76.968108ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 10m37.51447439s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (64.632462ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 10m51.668426264s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (63.611225ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 10m59.811847287s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (59.900423ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 11m13.4333674s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (58.468931ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 11m53.388761009s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (59.220817ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 12m38.841203413s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (62.13666ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 13m14.406222946s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (60.480875ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 14m22.434563678s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (59.849321ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 15m2.264285279s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-022322 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-022322 top pods -n kube-system: exit status 1 (62.525346ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xrtf5, age: 16m3.213783736s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
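Every retry above fails the same way: the Metrics API has nothing for pods that have been running 10+ minutes, which points at metrics-server itself rather than kubectl. A quick sketch for probing it directly against this run's context (plain kubectl; the deployment name metrics-server is inferred from the pod name above and is an assumption):

	# hit the Metrics API directly instead of going through `kubectl top`
	kubectl --context addons-022322 get --raw /apis/metrics.k8s.io/v1beta1/nodes
	# check metrics-server's own logs for scrape errors
	kubectl --context addons-022322 -n kube-system logs deploy/metrics-server --tail=50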
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-022322
helpers_test.go:235: (dbg) docker inspect addons-022322:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982",
	        "Created": "2024-09-15T06:29:57.902403759Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14686,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:29:58.035217085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/hostname",
	        "HostsPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/hosts",
	        "LogPath": "/var/lib/docker/containers/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982/f987f02b7bf012fb84f957cfb64ffc433110bc16cb68819a3279940874727982-json.log",
	        "Name": "/addons-022322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-022322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-022322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2-init/diff:/var/lib/docker/overlay2/41629ade7f7315f2df14bde3ca812850a45d34be79d1a0e1cd0df4510f198eaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/17399bd9caba346cff51ba5495243a00fc4f98007164c7f721ba31a37718ced2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-022322",
	                "Source": "/var/lib/docker/volumes/addons-022322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-022322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-022322",
	                "name.minikube.sigs.k8s.io": "addons-022322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4341f423acc3b63be59cc1466a91768de2aedaeeb73f44de65907efa3e283439",
	            "SandboxKey": "/var/run/docker/netns/4341f423acc3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-022322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a799b0ec0fecd5a4bd23fbed4e9986ab3cc570dd08d36ddf5fd2808b6a2d36c8",
	                    "EndpointID": "55c8c593338908cf9c9befd1f38c515f233792dcedb45ab4037d822354db546e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-022322",
	                        "f987f02b7bf0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
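One detail worth pulling out of the inspect dump: the container asks for dynamic loopback bindings on 22, 2376, 5000, 8443 and 32443 (empty HostPort under PortBindings), which Docker resolved to 32768-32772 under NetworkSettings.Ports. A short sketch for recovering the live mappings without scanning the JSON by eye (standard docker CLI; addons-022322 is this run's container name):

	# print all published ports for the minikube node container
	docker port addons-022322
	# or extract just the port map from inspect
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-022322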
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-022322 -n addons-022322
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-022322 logs -n 25: (1.157683145s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-319436              | download-only-319436   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-993247              | download-only-993247   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | --download-only -p                   | download-docker-583228 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | download-docker-583228               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-583228            | download-docker-583228 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | --download-only -p                   | binary-mirror-350163   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | binary-mirror-350163                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33455               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-350163              | binary-mirror-350163   | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| addons  | enable dashboard -p                  | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| start   | -p addons-022322 --wait=true         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:32 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:40 UTC | 15 Sep 24 06:40 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:40 UTC | 15 Sep 24 06:40 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ssh     | addons-022322 ssh curl -s            | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| addons  | addons-022322 addons                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-022322 addons                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| ip      | addons-022322 ip                     | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:41 UTC | 15 Sep 24 06:41 UTC |
	|         | -p addons-022322                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | addons-022322                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | -p addons-022322                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:42 UTC | 15 Sep 24 06:42 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-022322 ip                     | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-022322 addons disable         | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:43 UTC | 15 Sep 24 06:43 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-022322 addons                 | addons-022322          | jenkins | v1.34.0 | 15 Sep 24 06:46 UTC | 15 Sep 24 06:46 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:34.409975   13892 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:34.410248   13892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:34.410258   13892 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:34.410265   13892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:34.410441   13892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:29:34.411031   13892 out.go:352] Setting JSON to false
	I0915 06:29:34.411877   13892 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":725,"bootTime":1726381049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:29:34.411966   13892 start.go:139] virtualization: kvm guest
	I0915 06:29:34.414135   13892 out.go:177] * [addons-022322] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:29:34.415403   13892 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:29:34.415427   13892 notify.go:220] Checking for updates...
	I0915 06:29:34.417886   13892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:34.419006   13892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:29:34.420065   13892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:29:34.421040   13892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:29:34.422082   13892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:29:34.423276   13892 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:34.444416   13892 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:29:34.444507   13892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:34.493618   13892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 06:29:34.484777495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:34.493719   13892 docker.go:318] overlay module found
	I0915 06:29:34.495531   13892 out.go:177] * Using the docker driver based on user configuration
	I0915 06:29:34.496714   13892 start.go:297] selected driver: docker
	I0915 06:29:34.496727   13892 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:34.496737   13892 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:29:34.497458   13892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:34.540933   13892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-15 06:29:34.532425836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:34.541099   13892 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:34.541411   13892 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:29:34.543067   13892 out.go:177] * Using Docker driver with root privileges
	I0915 06:29:34.544470   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:29:34.544531   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:29:34.544548   13892 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:29:34.544621   13892 start.go:340] cluster config:
	{Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:29:34.546120   13892 out.go:177] * Starting "addons-022322" primary control-plane node in "addons-022322" cluster
	I0915 06:29:34.547257   13892 cache.go:121] Beginning downloading kic base image for docker with crio
	I0915 06:29:34.548470   13892 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:29:34.549705   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:29:34.549737   13892 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0915 06:29:34.549743   13892 cache.go:56] Caching tarball of preloaded images
	I0915 06:29:34.549740   13892 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:29:34.549818   13892 preload.go:172] Found /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0915 06:29:34.549828   13892 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0915 06:29:34.550188   13892 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json ...
	I0915 06:29:34.550215   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json: {Name:mk75eadabcf88a1e80943e1d313c0ac3326c2ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:29:34.564904   13892 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:34.565023   13892 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:29:34.565042   13892 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:29:34.565047   13892 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:29:34.565054   13892 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:29:34.565061   13892 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:29:46.068469   13892 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:29:46.068505   13892 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:29:46.068552   13892 start.go:360] acquireMachinesLock for addons-022322: {Name:mk8cc43910e6fc14b57d745cb90cbe44d561ca46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:29:46.068638   13892 start.go:364] duration metric: took 67.597µs to acquireMachinesLock for "addons-022322"
	I0915 06:29:46.068659   13892 start.go:93] Provisioning new machine with config: &{Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:29:46.068733   13892 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:29:46.070467   13892 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:29:46.070716   13892 start.go:159] libmachine.API.Create for "addons-022322" (driver="docker")
	I0915 06:29:46.070750   13892 client.go:168] LocalClient.Create starting
	I0915 06:29:46.070843   13892 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem
	I0915 06:29:46.153955   13892 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem
	I0915 06:29:46.229474   13892 cli_runner.go:164] Run: docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:29:46.245025   13892 cli_runner.go:211] docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:29:46.245103   13892 network_create.go:284] running [docker network inspect addons-022322] to gather additional debugging logs...
	I0915 06:29:46.245124   13892 cli_runner.go:164] Run: docker network inspect addons-022322
	W0915 06:29:46.260140   13892 cli_runner.go:211] docker network inspect addons-022322 returned with exit code 1
	I0915 06:29:46.260172   13892 network_create.go:287] error running [docker network inspect addons-022322]: docker network inspect addons-022322: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-022322 not found
	I0915 06:29:46.260189   13892 network_create.go:289] output of [docker network inspect addons-022322]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-022322 not found
	
	** /stderr **
	I0915 06:29:46.260306   13892 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:29:46.275634   13892 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000722ff0}
	I0915 06:29:46.275681   13892 network_create.go:124] attempt to create docker network addons-022322 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:29:46.275724   13892 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-022322 addons-022322
	I0915 06:29:46.333701   13892 network_create.go:108] docker network addons-022322 192.168.49.0/24 created
	I0915 06:29:46.333733   13892 kic.go:121] calculated static IP "192.168.49.2" for the "addons-022322" container
	I0915 06:29:46.333805   13892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:29:46.348257   13892 cli_runner.go:164] Run: docker volume create addons-022322 --label name.minikube.sigs.k8s.io=addons-022322 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:29:46.364683   13892 oci.go:103] Successfully created a docker volume addons-022322
	I0915 06:29:46.364749   13892 cli_runner.go:164] Run: docker run --rm --name addons-022322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --entrypoint /usr/bin/test -v addons-022322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:29:53.558650   13892 cli_runner.go:217] Completed: docker run --rm --name addons-022322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --entrypoint /usr/bin/test -v addons-022322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (7.19385898s)
	I0915 06:29:53.558683   13892 oci.go:107] Successfully prepared a docker volume addons-022322
	I0915 06:29:53.558702   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:29:53.558719   13892 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:29:53.558765   13892 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-022322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:29:57.843175   13892 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-022322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.284379385s)
	I0915 06:29:57.843202   13892 kic.go:203] duration metric: took 4.284480255s to extract preloaded images to volume ...
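The two completed docker runs above are the preload trick: the image tarball is bind-mounted read-only into a throwaway container and untarred straight into the named volume that later backs /var in the node container. A hedged sketch, with PRELOAD standing in for the tarball path from the log:

    PRELOAD=/path/to/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4  # placeholder
    IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644

    # Sanity-check that the volume contains /var/lib, then extract into it.
    docker run --rm --entrypoint /usr/bin/test -v addons-022322:/var "$IMAGE" -d /var/lib
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro -v addons-022322:/extractDir \
      "$IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
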
	W0915 06:29:57.843320   13892 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 06:29:57.843484   13892 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:29:57.888235   13892 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-022322 --name addons-022322 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-022322 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-022322 --network addons-022322 --ip 192.168.49.2 --volume addons-022322:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:29:58.195371   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Running}}
	I0915 06:29:58.213384   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.231552   13892 cli_runner.go:164] Run: docker exec addons-022322 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:29:58.274993   13892 oci.go:144] the created container "addons-022322" has a running status.
	I0915 06:29:58.275022   13892 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa...
	I0915 06:29:58.414826   13892 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:29:58.438897   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.455371   13892 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:29:58.455390   13892 kic_runner.go:114] Args: [docker exec --privileged addons-022322 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:29:58.500533   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:29:58.517370   13892 machine.go:93] provisionDockerMachine start ...
	I0915 06:29:58.517454   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:29:58.541070   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:29:58.541337   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:29:58.541359   13892 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:29:58.542136   13892 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45940->127.0.0.1:32768: read: connection reset by peer
	I0915 06:30:01.671607   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-022322
	
	I0915 06:30:01.671636   13892 ubuntu.go:169] provisioning hostname "addons-022322"
	I0915 06:30:01.671686   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:01.688450   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:01.688643   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:01.688659   13892 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-022322 && echo "addons-022322" | sudo tee /etc/hostname
	I0915 06:30:01.830097   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-022322
	
	I0915 06:30:01.830160   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:01.847238   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:01.847398   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:01.847416   13892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-022322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-022322/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-022322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:30:01.976277   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
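The hostname provisioning above is the standard Debian-style 127.0.1.1 dance: set the hostname, then make sure /etc/hosts resolves it. The same logic as a standalone sketch (HOSTNAME is a placeholder for the value in the log):

    HOSTNAME=addons-022322   # placeholder

    sudo hostname "$HOSTNAME" && echo "$HOSTNAME" | sudo tee /etc/hostname

    # Rewrite or append the 127.0.1.1 entry so the name resolves locally.
    if ! grep -xq ".*\s$HOSTNAME" /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $HOSTNAME/g" /etc/hosts
      else
        echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
      fi
    fi
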
	I0915 06:30:01.976304   13892 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-5979/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-5979/.minikube}
	I0915 06:30:01.976347   13892 ubuntu.go:177] setting up certificates
	I0915 06:30:01.976360   13892 provision.go:84] configureAuth start
	I0915 06:30:01.976418   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:01.992863   13892 provision.go:143] copyHostCerts
	I0915 06:30:01.992932   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/ca.pem (1082 bytes)
	I0915 06:30:01.993032   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/cert.pem (1123 bytes)
	I0915 06:30:01.993090   13892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-5979/.minikube/key.pem (1679 bytes)
	I0915 06:30:01.993138   13892 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem org=jenkins.addons-022322 san=[127.0.0.1 192.168.49.2 addons-022322 localhost minikube]
	I0915 06:30:02.152480   13892 provision.go:177] copyRemoteCerts
	I0915 06:30:02.152547   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:30:02.152581   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.169072   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.264370   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0915 06:30:02.285061   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:30:02.305376   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 06:30:02.325505   13892 provision.go:87] duration metric: took 349.132448ms to configureAuth
	I0915 06:30:02.325532   13892 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:30:02.325690   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:02.325794   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.342353   13892 main.go:141] libmachine: Using SSH client type: native
	I0915 06:30:02.342515   13892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0915 06:30:02.342529   13892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0915 06:30:02.557166   13892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0915 06:30:02.557186   13892 machine.go:96] duration metric: took 4.039795692s to provisionDockerMachine
	I0915 06:30:02.557198   13892 client.go:171] duration metric: took 16.486440184s to LocalClient.Create
	I0915 06:30:02.557211   13892 start.go:167] duration metric: took 16.486496436s to libmachine.API.Create "addons-022322"
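The container-runtime option set a few lines up is nothing more than a sysconfig drop-in plus a restart. Reproduced as a standalone sketch:

    # Write the insecure-registry option where CRI-O's unit reads it,
    # then restart the runtime so it takes effect.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
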
	I0915 06:30:02.557220   13892 start.go:293] postStartSetup for "addons-022322" (driver="docker")
	I0915 06:30:02.557232   13892 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:30:02.557296   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:30:02.557345   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.573470   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.668798   13892 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:30:02.671706   13892 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:30:02.671735   13892 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:30:02.671743   13892 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:30:02.671751   13892 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:30:02.671763   13892 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-5979/.minikube/addons for local assets ...
	I0915 06:30:02.671828   13892 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-5979/.minikube/files for local assets ...
	I0915 06:30:02.671860   13892 start.go:296] duration metric: took 114.633114ms for postStartSetup
	I0915 06:30:02.672224   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:02.688735   13892 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/config.json ...
	I0915 06:30:02.688986   13892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:30:02.689026   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.704764   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.792641   13892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:30:02.797055   13892 start.go:128] duration metric: took 16.728306999s to createHost
	I0915 06:30:02.797078   13892 start.go:83] releasing machines lock for "addons-022322", held for 16.728428922s
	I0915 06:30:02.797129   13892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-022322
	I0915 06:30:02.813813   13892 ssh_runner.go:195] Run: cat /version.json
	I0915 06:30:02.813860   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.813912   13892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:30:02.813966   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:02.831602   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.832784   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:02.923562   13892 ssh_runner.go:195] Run: systemctl --version
	I0915 06:30:02.995566   13892 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0915 06:30:03.130869   13892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:30:03.134959   13892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:03.151986   13892 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:30:03.152064   13892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:30:03.177621   13892 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
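The find/mv runs above disable conflicting CNI configs by renaming them with a .mk_disabled suffix instead of deleting them, so the step is reversible. A sketch of the same rename for the loopback case, using the safer quoted form of -exec:

    # Rename loopback CNI configs out of the way; restore by renaming back.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      -name '*loopback.conf*' -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
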
	I0915 06:30:03.177641   13892 start.go:495] detecting cgroup driver to use...
	I0915 06:30:03.177677   13892 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:30:03.177720   13892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 06:30:03.191256   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 06:30:03.200792   13892 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:30:03.200832   13892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:30:03.212398   13892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:30:03.224680   13892 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:30:03.296606   13892 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:30:03.380521   13892 docker.go:233] disabling docker service ...
	I0915 06:30:03.380577   13892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:30:03.397309   13892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:30:03.407246   13892 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:30:03.479912   13892 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:30:03.557251   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
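Stopping both the socket and the service, then disabling and masking, is what keeps Docker from being socket-activated back to life while CRI-O owns the node. The same sequence in isolation:

    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    # Verify nothing is left running.
    sudo systemctl is-active --quiet docker || echo "docker is stopped"
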
	I0915 06:30:03.567181   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:30:03.580975   13892 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0915 06:30:03.581028   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.589417   13892 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0915 06:30:03.589475   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.597938   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.606431   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.614878   13892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:30:03.622833   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.630960   13892 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.644352   13892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0915 06:30:03.652628   13892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:30:03.659670   13892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:30:03.666698   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:03.739739   13892 ssh_runner.go:195] Run: sudo systemctl restart crio
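All of the CRI-O tuning above is done with in-place sed edits against a single drop-in, /etc/crio/crio.conf.d/02-crio.conf, followed by one restart. The two key edits as a sketch:

    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Pin the pause image and switch the cgroup manager, as in the log.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"

    sudo systemctl daemon-reload && sudo systemctl restart crio
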
	I0915 06:30:03.813327   13892 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0915 06:30:03.813394   13892 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0915 06:30:03.816594   13892 start.go:563] Will wait 60s for crictl version
	I0915 06:30:03.816637   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:30:03.819439   13892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:30:03.850136   13892 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0915 06:30:03.850230   13892 ssh_runner.go:195] Run: crio --version
	I0915 06:30:03.884035   13892 ssh_runner.go:195] Run: crio --version
	I0915 06:30:03.917786   13892 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0915 06:30:03.918938   13892 cli_runner.go:164] Run: docker network inspect addons-022322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:30:03.934390   13892 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:30:03.937713   13892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
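The one-liner above is the log's recurring /etc/hosts idiom: strip any stale tab-separated entry for the name, append a fresh one, and install the temp file with a single cp. Generalized sketch (IP and NAME are the values from the log):

    IP=192.168.49.1
    NAME=host.minikube.internal

    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
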
	I0915 06:30:03.947346   13892 kubeadm.go:883] updating cluster {Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:30:03.947459   13892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0915 06:30:03.947520   13892 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:04.005083   13892 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:30:04.005102   13892 crio.go:433] Images already preloaded, skipping extraction
	I0915 06:30:04.005148   13892 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:30:04.035478   13892 crio.go:514] all images are preloaded for cri-o runtime.
	I0915 06:30:04.035500   13892 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:30:04.035509   13892 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0915 06:30:04.035628   13892 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-022322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:30:04.035702   13892 ssh_runner.go:195] Run: crio config
	I0915 06:30:04.075458   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:30:04.075479   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:30:04.075490   13892 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:30:04.075516   13892 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-022322 NodeName:addons-022322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:30:04.075684   13892 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-022322"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:30:04.075747   13892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:30:04.083565   13892 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:30:04.083629   13892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:30:04.091035   13892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0915 06:30:04.106246   13892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:30:04.121787   13892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0915 06:30:04.137021   13892 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:30:04.139971   13892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:30:04.149279   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:04.219995   13892 ssh_runner.go:195] Run: sudo systemctl start kubelet
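The three "scp memory -->" lines above stage the kubelet drop-in, unit file, and kubeadm config from in-memory buffers; a daemon-reload then lets the kubelet start. A hedged, abbreviated sketch of installing the drop-in (only the flags shown in the unit rendered earlier; minikube's real file may differ):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-022322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl start kubelet
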
	I0915 06:30:04.231563   13892 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322 for IP: 192.168.49.2
	I0915 06:30:04.231583   13892 certs.go:194] generating shared ca certs ...
	I0915 06:30:04.231604   13892 certs.go:226] acquiring lock for ca certs: {Name:mkdad922548833f717724234d3dfea667af688cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.231715   13892 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key
	I0915 06:30:04.327854   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt ...
	I0915 06:30:04.327883   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt: {Name:mk88553ea6fe6b3bbcddbaf5fb4399b9d57d5f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.328061   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key ...
	I0915 06:30:04.328080   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key: {Name:mk24979239a9d34f46352c8e1b862a8e1f67ff74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.328180   13892 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key
	I0915 06:30:04.431987   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt ...
	I0915 06:30:04.432015   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt: {Name:mk51bec24258c7187bbcfbda02cab37b09aca3d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.432183   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key ...
	I0915 06:30:04.432194   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key: {Name:mk16f3436fddecb64c7b08ccd6fc72cd1ef1fcbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.432279   13892 certs.go:256] generating profile certs ...
	I0915 06:30:04.432331   13892 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key
	I0915 06:30:04.432352   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt with IP's: []
	I0915 06:30:04.586803   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt ...
	I0915 06:30:04.586831   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: {Name:mked263498a55efc2d51dcfb8a63fb9ec85dbcce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.586983   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key ...
	I0915 06:30:04.586993   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.key: {Name:mk512a1e1959bb23fe8a38640e6f78daabedd436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.587058   13892 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91
	I0915 06:30:04.587076   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:30:04.750681   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 ...
	I0915 06:30:04.750707   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91: {Name:mkee5aa0fd2cbaa659cee7dc8b42df64402edc7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.750854   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91 ...
	I0915 06:30:04.750867   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91: {Name:mk1e30234ffaa908afe95a4568f6afb8dd531545 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.750937   13892 certs.go:381] copying /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt.2ca64f91 -> /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt
	I0915 06:30:04.751005   13892 certs.go:385] copying /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key.2ca64f91 -> /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key
	I0915 06:30:04.751050   13892 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key
	I0915 06:30:04.751065   13892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt with IP's: []
	I0915 06:30:04.940019   13892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt ...
	I0915 06:30:04.940043   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt: {Name:mk350f05c318062bf8390e5793e0bce85435f32a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.940196   13892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key ...
	I0915 06:30:04.940224   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key: {Name:mk6d8d46803827bdaeae91eab214ce101c0c0420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:04.940408   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca-key.pem (1679 bytes)
	I0915 06:30:04.940441   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/ca.pem (1082 bytes)
	I0915 06:30:04.940467   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:30:04.940491   13892 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-5979/.minikube/certs/key.pem (1679 bytes)
	I0915 06:30:04.941035   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:30:04.963000   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:30:04.983402   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:30:05.003697   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:30:05.024132   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:30:05.043937   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:30:05.063970   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:30:05.084090   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:30:05.104158   13892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:30:05.125016   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:30:05.140478   13892 ssh_runner.go:195] Run: openssl version
	I0915 06:30:05.145206   13892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:30:05.153254   13892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.156142   13892 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:30 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.156185   13892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:30:05.162089   13892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
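The b5213941.0 name above is not arbitrary: OpenSSL indexes CA directories by subject hash, and the symlink name is that hash plus a .0 suffix. The hash comes from the x509 -hash call two lines up:

    # Prints the subject hash used as the symlink name (b5213941 here).
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem

    # Recreate the index symlink exactly as in the log.
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
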
	I0915 06:30:05.169807   13892 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:30:05.172461   13892 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:30:05.172500   13892 kubeadm.go:392] StartCluster: {Name:addons-022322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-022322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:05.172563   13892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0915 06:30:05.172600   13892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:30:05.202825   13892 cri.go:89] found id: ""
	I0915 06:30:05.202888   13892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:30:05.210535   13892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:30:05.217839   13892 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:30:05.217879   13892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:30:05.225045   13892 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:30:05.225061   13892 kubeadm.go:157] found existing configuration files:
	
	I0915 06:30:05.225099   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:30:05.232105   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:30:05.232161   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:30:05.238944   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:30:05.245833   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:30:05.245876   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:30:05.252619   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:30:05.259724   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:30:05.259769   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:30:05.266638   13892 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:30:05.273591   13892 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:30:05.273634   13892 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:30:05.280379   13892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 06:30:05.310747   13892 kubeadm.go:310] W0915 06:30:05.310080    1295 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:30:05.311052   13892 kubeadm.go:310] W0915 06:30:05.310582    1295 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:30:05.327784   13892 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0915 06:30:05.372778   13892 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
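The two deprecation warnings above ship their own remedy. Applied to the file minikube generated earlier, it would look like this (the output path is a placeholder; kubeadm in v1.31 migrates v1beta3 configs to the current API version):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml   # placeholder path
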
	I0915 06:30:15.409306   13892 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:30:15.409389   13892 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:30:15.409512   13892 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 06:30:15.409605   13892 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0915 06:30:15.409650   13892 kubeadm.go:310] OS: Linux
	I0915 06:30:15.409729   13892 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 06:30:15.409811   13892 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 06:30:15.409885   13892 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 06:30:15.409961   13892 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 06:30:15.410028   13892 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 06:30:15.410096   13892 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 06:30:15.410154   13892 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 06:30:15.410224   13892 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 06:30:15.410283   13892 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 06:30:15.410362   13892 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:30:15.410462   13892 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:30:15.410539   13892 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:30:15.410605   13892 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:30:15.412349   13892 out.go:235]   - Generating certificates and keys ...
	I0915 06:30:15.412446   13892 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:30:15.412504   13892 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:30:15.412593   13892 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:30:15.412685   13892 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:30:15.412743   13892 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:30:15.412790   13892 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:30:15.412843   13892 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:30:15.412979   13892 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-022322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:30:15.413045   13892 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:30:15.413211   13892 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-022322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:30:15.413278   13892 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:30:15.413348   13892 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:30:15.413417   13892 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:30:15.413497   13892 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:30:15.413543   13892 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:30:15.413596   13892 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:30:15.413651   13892 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:30:15.413711   13892 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:30:15.413763   13892 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:30:15.413833   13892 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:30:15.413920   13892 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:30:15.415294   13892 out.go:235]   - Booting up control plane ...
	I0915 06:30:15.415383   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:30:15.415472   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:30:15.415571   13892 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:30:15.415674   13892 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:30:15.415751   13892 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:30:15.415785   13892 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:30:15.415945   13892 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:30:15.416086   13892 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:30:15.416138   13892 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00131336s
	I0915 06:30:15.416214   13892 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:30:15.416267   13892 kubeadm.go:310] [api-check] The API server is healthy after 4.0019115s
	I0915 06:30:15.416369   13892 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:30:15.416471   13892 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:30:15.416520   13892 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:30:15.416688   13892 kubeadm.go:310] [mark-control-plane] Marking the node addons-022322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:30:15.416769   13892 kubeadm.go:310] [bootstrap-token] Using token: qtz71d.xvu8oxfcrox05ula
	I0915 06:30:15.418849   13892 out.go:235]   - Configuring RBAC rules ...
	I0915 06:30:15.418964   13892 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:30:15.419059   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:30:15.419214   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:30:15.419359   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:30:15.419468   13892 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:30:15.419543   13892 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:30:15.419648   13892 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:30:15.419706   13892 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:30:15.419754   13892 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:30:15.419760   13892 kubeadm.go:310] 
	I0915 06:30:15.419809   13892 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:30:15.419820   13892 kubeadm.go:310] 
	I0915 06:30:15.419907   13892 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:30:15.419917   13892 kubeadm.go:310] 
	I0915 06:30:15.419949   13892 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:30:15.420041   13892 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:30:15.420120   13892 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:30:15.420127   13892 kubeadm.go:310] 
	I0915 06:30:15.420230   13892 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:30:15.420239   13892 kubeadm.go:310] 
	I0915 06:30:15.420279   13892 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:30:15.420288   13892 kubeadm.go:310] 
	I0915 06:30:15.420336   13892 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:30:15.420404   13892 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:30:15.420486   13892 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:30:15.420494   13892 kubeadm.go:310] 
	I0915 06:30:15.420609   13892 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:30:15.420683   13892 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:30:15.420688   13892 kubeadm.go:310] 
	I0915 06:30:15.420761   13892 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qtz71d.xvu8oxfcrox05ula \
	I0915 06:30:15.420863   13892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b6fa81cefa24e7bb86a72fc94b64425479c808b0a0b074c57900fb8f22ced41 \
	I0915 06:30:15.420883   13892 kubeadm.go:310] 	--control-plane 
	I0915 06:30:15.420892   13892 kubeadm.go:310] 
	I0915 06:30:15.420975   13892 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:30:15.420984   13892 kubeadm.go:310] 
	I0915 06:30:15.421055   13892 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qtz71d.xvu8oxfcrox05ula \
	I0915 06:30:15.421162   13892 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b6fa81cefa24e7bb86a72fc94b64425479c808b0a0b074c57900fb8f22ced41 
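
	The --discovery-token-ca-cert-hash printed in the join commands above pins the cluster CA, so a joining node can authenticate the control plane before handing over its bootstrap token. As a hedged aside (this is the standard kubeadm procedure, not anything minikube-specific), the same hash can be recomputed on the control-plane node from the CA certificate:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'   # prefix the output with "sha256:"
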
	I0915 06:30:15.421174   13892 cni.go:84] Creating CNI manager for ""
	I0915 06:30:15.421186   13892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:30:15.422864   13892 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 06:30:15.424157   13892 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 06:30:15.427756   13892 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 06:30:15.427770   13892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 06:30:15.443978   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
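
	The CNI manifest is copied to the node over SSH and applied with the kubectl binary minikube manages under /var/lib/minikube/binaries. A hedged follow-up check, assuming the DaemonSet is named kindnet as in the upstream kindnet manifest:

	kubectl -n kube-system get daemonset kindnet -o wide   # assumption: DaemonSet name "kindnet"
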
	I0915 06:30:15.630994   13892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:30:15.631066   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:15.631098   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-022322 minikube.k8s.io/updated_at=2024_09_15T06_30_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-022322 minikube.k8s.io/primary=true
	I0915 06:30:15.637726   13892 ops.go:34] apiserver oom_adj: -16
	I0915 06:30:15.740354   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:16.241041   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:16.740787   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:17.240556   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:17.741154   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:18.240693   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:18.740996   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:19.241363   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:19.740837   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:20.241069   13892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:30:20.301913   13892 kubeadm.go:1113] duration metric: took 4.670906624s to wait for elevateKubeSystemPrivileges
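
	The burst of `kubectl get sa default` runs above appears to be minikube polling, at roughly 500ms intervals, until the apiserver has created the default ServiceAccount; only then is the minikube-rbac cluster-admin binding for kube-system considered settled (the 4.67s duration metric covers this wait). A minimal shell sketch of the same wait, assuming kubectl is on PATH with a configured kubeconfig:

	until kubectl get serviceaccount default -n default >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms cadence seen in the log
	done
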
	I0915 06:30:20.301953   13892 kubeadm.go:394] duration metric: took 15.129453888s to StartCluster
	I0915 06:30:20.301974   13892 settings.go:142] acquiring lock: {Name:mk6128dee5a1f201e20204fc9647ceb1f8837444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:20.302067   13892 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:30:20.302410   13892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-5979/kubeconfig: {Name:mkb9d32ea81cbb0fb472b94a2fbc3394fd0d5468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:20.302584   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:30:20.302603   13892 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0915 06:30:20.302674   13892 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
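
	The toEnable map above is the complete addon matrix for this profile, printed with Go's default map formatting (keys sorted). The same switches can be flipped from the CLI after start; a hedged, illustrative pair using the enable form of the disable command shown elsewhere in this report:

	out/minikube-linux-amd64 -p addons-022322 addons enable registry
	out/minikube-linux-amd64 -p addons-022322 addons enable metrics-server
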
	I0915 06:30:20.302780   13892 addons.go:69] Setting yakd=true in profile "addons-022322"
	I0915 06:30:20.302797   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:20.302809   13892 addons.go:234] Setting addon yakd=true in "addons-022322"
	I0915 06:30:20.302800   13892 addons.go:69] Setting ingress=true in profile "addons-022322"
	I0915 06:30:20.302811   13892 addons.go:69] Setting registry=true in profile "addons-022322"
	I0915 06:30:20.302830   13892 addons.go:234] Setting addon ingress=true in "addons-022322"
	I0915 06:30:20.302841   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302846   13892 addons.go:234] Setting addon registry=true in "addons-022322"
	I0915 06:30:20.302853   13892 addons.go:69] Setting default-storageclass=true in profile "addons-022322"
	I0915 06:30:20.302869   13892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-022322"
	I0915 06:30:20.302882   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302893   13892 addons.go:69] Setting metrics-server=true in profile "addons-022322"
	I0915 06:30:20.302896   13892 addons.go:69] Setting storage-provisioner=true in profile "addons-022322"
	I0915 06:30:20.302910   13892 addons.go:234] Setting addon storage-provisioner=true in "addons-022322"
	I0915 06:30:20.302915   13892 addons.go:234] Setting addon metrics-server=true in "addons-022322"
	I0915 06:30:20.302906   13892 addons.go:69] Setting inspektor-gadget=true in profile "addons-022322"
	I0915 06:30:20.302941   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302944   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302959   13892 addons.go:234] Setting addon inspektor-gadget=true in "addons-022322"
	I0915 06:30:20.302986   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.303062   13892 addons.go:69] Setting gcp-auth=true in profile "addons-022322"
	I0915 06:30:20.303085   13892 mustload.go:65] Loading cluster: addons-022322
	I0915 06:30:20.303201   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303250   13892 config.go:182] Loaded profile config "addons-022322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:30:20.303362   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303410   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303410   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303453   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303460   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303468   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.303767   13892 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-022322"
	I0915 06:30:20.303787   13892 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-022322"
	I0915 06:30:20.303811   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.302882   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.304488   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309326   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309817   13892 addons.go:69] Setting helm-tiller=true in profile "addons-022322"
	I0915 06:30:20.309849   13892 addons.go:234] Setting addon helm-tiller=true in "addons-022322"
	I0915 06:30:20.309887   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.310907   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.331963   13892 addons.go:69] Setting volcano=true in profile "addons-022322"
	I0915 06:30:20.332020   13892 addons.go:234] Setting addon volcano=true in "addons-022322"
	I0915 06:30:20.332067   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.332190   13892 addons.go:69] Setting cloud-spanner=true in profile "addons-022322"
	I0915 06:30:20.332222   13892 addons.go:234] Setting addon cloud-spanner=true in "addons-022322"
	I0915 06:30:20.332251   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.332716   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.332771   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.302869   13892 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-022322"
	I0915 06:30:20.333031   13892 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-022322"
	I0915 06:30:20.333380   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.333586   13892 addons.go:69] Setting ingress-dns=true in profile "addons-022322"
	I0915 06:30:20.333604   13892 addons.go:234] Setting addon ingress-dns=true in "addons-022322"
	I0915 06:30:20.333652   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.334281   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.334862   13892 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-022322"
	I0915 06:30:20.334933   13892 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-022322"
	I0915 06:30:20.334982   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.335579   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.309325   13892 out.go:177] * Verifying Kubernetes components...
	I0915 06:30:20.337960   13892 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:30:20.338463   13892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:30:20.338120   13892 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:30:20.338351   13892 addons.go:69] Setting volumesnapshots=true in profile "addons-022322"
	I0915 06:30:20.338628   13892 addons.go:234] Setting addon volumesnapshots=true in "addons-022322"
	I0915 06:30:20.339467   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.339891   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:30:20.339905   13892 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:30:20.339941   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
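
	The Go template in the docker inspect calls above extracts the host port Docker mapped to the container's 22/tcp, which is how minikube discovers where to open its SSH session (the sshutil lines that follow show the result, Port:32768). A hedged standalone equivalent:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-022322
	docker port addons-022322 22   # porcelain command exposing the same mapping
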
	I0915 06:30:20.342092   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.342452   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:30:20.342525   13892 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:30:20.342607   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.342971   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.346120   13892 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:30:20.347336   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:30:20.348659   13892 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:30:20.348674   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:20.348704   13892 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:30:20.349046   13892 addons.go:234] Setting addon default-storageclass=true in "addons-022322"
	I0915 06:30:20.349207   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.349642   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.351436   13892 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:30:20.351456   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:30:20.351509   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.352633   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:20.354116   13892 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:30:20.354130   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:30:20.354167   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.357730   13892 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:30:20.357783   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:30:20.357860   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.358885   13892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:30:20.360491   13892 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:30:20.360511   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:30:20.360581   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.366477   13892 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 06:30:20.367705   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 06:30:20.367726   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 06:30:20.367773   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.373892   13892 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:30:20.373916   13892 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:30:20.373975   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.401143   13892 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-022322"
	I0915 06:30:20.401194   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:20.401670   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:20.404458   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:30:20.404531   13892 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:30:20.406526   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.412264   13892 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:30:20.412294   13892 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:30:20.412366   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.413394   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:30:20.414515   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W0915 06:30:20.415159   13892 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
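
	The volcano warning above is expected on this job: the addon is skipped because it does not support the crio runtime this profile uses. A hedged way to confirm which runtime a node is running:

	kubectl get nodes -o wide   # the CONTAINER-RUNTIME column reads cri-o://<version> here
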
	I0915 06:30:20.416250   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.421239   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:30:20.425614   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:30:20.426998   13892 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:30:20.427153   13892 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:30:20.428255   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:30:20.428416   13892 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:30:20.428428   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:30:20.428481   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.428833   13892 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:30:20.428848   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:30:20.428892   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.430788   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.431752   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:30:20.431811   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:30:20.433923   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:30:20.433942   13892 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:30:20.433993   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.435738   13892 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:30:20.437159   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:30:20.437177   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:30:20.437225   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.445942   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.448432   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.456319   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.457008   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.466588   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.470634   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.470670   13892 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:30:20.471780   13892 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:30:20.472972   13892 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:30:20.472989   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:30:20.473040   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:20.475108   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
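
	The pipeline above edits the coredns ConfigMap in place: sed inserts a hosts block resolving host.minikube.internal to the gateway address 192.168.49.1 (plus a log directive), and the result is pushed back with kubectl replace. A hedged spot check that the record landed:

	kubectl --context addons-022322 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

	Per the sed expression, the output should contain:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
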
	I0915 06:30:20.477999   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.481170   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.488919   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.489280   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.493975   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:20.729415   13892 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:30:20.832138   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 06:30:20.832170   13892 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 06:30:20.842928   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:30:20.842956   13892 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:30:20.843447   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:30:20.845491   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:30:20.845517   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:30:20.935961   13892 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:30:20.935990   13892 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:30:21.020819   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:30:21.020845   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:30:21.022344   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:30:21.022633   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:30:21.028470   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:30:21.028540   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:30:21.036612   13892 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:30:21.036638   13892 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 06:30:21.043861   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:30:21.044948   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:30:21.044984   13892 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:30:21.129298   13892 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:30:21.129392   13892 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:30:21.132074   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:30:21.136371   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:30:21.140305   13892 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:30:21.140374   13892 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:30:21.223515   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:30:21.223615   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:30:21.231836   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:30:21.231864   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:30:21.323974   13892 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:30:21.323999   13892 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:30:21.324884   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:30:21.324911   13892 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:30:21.329210   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:30:21.335116   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:30:21.343630   13892 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:30:21.343660   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:30:21.423095   13892 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:30:21.423183   13892 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:30:21.439939   13892 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:30:21.439989   13892 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:30:21.521606   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:30:21.521696   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:30:21.537275   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:30:21.621192   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:30:21.621282   13892 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:30:21.724452   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:30:21.724539   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:30:21.737909   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:30:21.739858   13892 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:30:21.739880   13892 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:30:21.925913   13892 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.450763214s)
	I0915 06:30:21.925946   13892 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0915 06:30:21.927074   13892 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.197633331s)
	I0915 06:30:21.927844   13892 node_ready.go:35] waiting up to 6m0s for node "addons-022322" to be "Ready" ...
	I0915 06:30:21.938668   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:30:21.938695   13892 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:30:22.131212   13892 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:30:22.131302   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:30:22.227350   13892 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:30:22.227434   13892 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:30:22.337579   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.494097126s)
	I0915 06:30:22.424841   13892 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:30:22.424937   13892 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:30:22.426572   13892 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:22.426594   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:30:22.441869   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:30:22.441902   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:30:22.625349   13892 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:30:22.625431   13892 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:30:22.625749   13892 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-022322" context rescaled to 1 replicas
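
	The rescale above trims CoreDNS from kubeadm's default of two replicas to one, which is sufficient for a single-node profile. The hand-run equivalent (hedged):

	kubectl --context addons-022322 -n kube-system scale deployment coredns --replicas=1
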
	I0915 06:30:22.722472   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:30:22.737559   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:30:22.830732   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:30:22.830830   13892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:30:22.941338   13892 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:30:22.941417   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:30:23.037465   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:30:23.037557   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:30:23.131738   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:30:23.527823   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:30:23.527862   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:30:23.635288   13892 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:30:23.635379   13892 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:30:23.939243   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
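
	node_ready.go polls the node's Ready condition, which stays "False" here until the CNI is functional and the kubelet reports the node healthy. A hedged spot check of the same condition:

	kubectl --context addons-022322 get node addons-022322 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
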
	I0915 06:30:23.941219   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:30:24.842837   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.820395677s)
	I0915 06:30:24.843012   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.820265268s)
	I0915 06:30:26.241838   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.19793849s)
	I0915 06:30:26.241872   13892 addons.go:475] Verifying addon ingress=true in "addons-022322"
	I0915 06:30:26.241927   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.109761671s)
	I0915 06:30:26.241965   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.105507724s)
	I0915 06:30:26.242074   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.912777866s)
	I0915 06:30:26.242143   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.906961401s)
	I0915 06:30:26.242274   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.704908207s)
	I0915 06:30:26.242305   13892 addons.go:475] Verifying addon metrics-server=true in "addons-022322"
	I0915 06:30:26.242321   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.504383825s)
	I0915 06:30:26.242338   13892 addons.go:475] Verifying addon registry=true in "addons-022322"
	I0915 06:30:26.242376   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.519818065s)
	I0915 06:30:26.243677   13892 out.go:177] * Verifying registry addon...
	I0915 06:30:26.243699   13892 out.go:177] * Verifying ingress addon...
	I0915 06:30:26.243677   13892 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-022322 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:30:26.245794   13892 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:30:26.246058   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:30:26.250360   13892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:30:26.250378   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:26.250570   13892 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:30:26.250588   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
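
	kapi.go waits on label selectors rather than concrete pod names, so pods that are recreated or restarted during rollout are still tracked. Hedged manual equivalents for the two selectors above:

	kubectl --context addons-022322 -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-022322 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
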
	I0915 06:30:26.430553   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:26.752630   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:26.753835   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:26.845459   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.107795434s)
	W0915 06:30:26.845502   13892 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:30:26.845528   13892 retry.go:31] will retry after 304.40675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:30:26.845567   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.713721026s)
	I0915 06:30:27.124607   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.18332755s)
	I0915 06:30:27.124648   13892 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-022322"
	I0915 06:30:27.126674   13892 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:30:27.128843   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:30:27.131216   13892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:30:27.131239   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:27.150966   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
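
	The failed apply above is the usual CRD ordering race: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines its kind, and the apiserver rejects it before the new type is established ("ensure CRDs are installed first"). minikube handles this with the 304ms backoff and the --force re-apply seen here; a hedged alternative is to gate explicitly on CRD establishment before applying the dependent object:

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
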
	I0915 06:30:27.248632   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:27.249242   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:27.566407   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:30:27.566474   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:27.584537   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:27.632194   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:27.750415   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:27.751081   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:27.841475   13892 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:30:27.934260   13892 addons.go:234] Setting addon gcp-auth=true in "addons-022322"
	I0915 06:30:27.934313   13892 host.go:66] Checking if "addons-022322" exists ...
	I0915 06:30:27.934813   13892 cli_runner.go:164] Run: docker container inspect addons-022322 --format={{.State.Status}}
	I0915 06:30:27.955612   13892 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:30:27.955667   13892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-022322
	I0915 06:30:27.970776   13892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/addons-022322/id_rsa Username:docker}
	I0915 06:30:28.135033   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:28.249556   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:28.250273   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:28.430964   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:28.631563   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:28.748977   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:28.749552   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.132354   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:29.249177   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:29.249568   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.633236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:29.750088   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:29.750636   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:29.859251   13892 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.708230768s)
	I0915 06:30:29.859418   13892 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.903782974s)
	I0915 06:30:29.861552   13892 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:30:29.863225   13892 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:30:29.864891   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:30:29.864910   13892 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:30:29.925719   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:30:29.925740   13892 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:30:29.943867   13892 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:30:29.943890   13892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:30:29.960393   13892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:30:30.132966   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:30.249143   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:30.249613   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:30.526051   13892 addons.go:475] Verifying addon gcp-auth=true in "addons-022322"
	I0915 06:30:30.527857   13892 out.go:177] * Verifying gcp-auth addon...
	I0915 06:30:30.530049   13892 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:30:30.532704   13892 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:30:30.532727   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
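(Editor's note: the kapi.go:96 "waiting for pod ... current state: Pending" lines above and below are minikube's addon verifier polling each label selector until the matching pods leave Pending. As a rough illustration only, not minikube's actual kapi.go code, the pattern amounts to a client-go poll loop like the sketch below; the function name, 500ms interval, and 6m timeout are assumptions.)

	// Hypothetical sketch of the poll loop behind the kapi.go:96 "waiting for pod"
	// log lines: list pods by label selector, retry while any is still Pending.
	// Assumes client-go; waitForLabel and its interval/timeout are illustrative.
	package main
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, err
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodPending { // logged as "current state: Pending"
						return false, nil
					}
				}
				return len(pods.Items) > 0, nil // keep polling until at least one pod exists
			})
	}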
	I0915 06:30:30.633796   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:30.749512   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:30.749926   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:30.930726   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:31.032992   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:31.132430   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:31.248998   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:31.249582   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:31.532095   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:31.631866   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:31.749423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:31.749735   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.033310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:32.131692   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:32.248944   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:32.249409   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.532440   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:32.632069   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:32.749426   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:32.749899   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:32.930811   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:33.033142   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:33.131445   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:33.249273   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:33.249696   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:33.533493   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:33.632131   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:33.749349   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:33.749683   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.033541   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:34.131638   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:34.249215   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:34.249571   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.533324   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:34.631916   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:34.749515   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:34.749960   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:34.931178   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:35.033423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:35.131815   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:35.249166   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:35.249432   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:35.532510   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:35.631903   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:35.749413   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:35.749752   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:36.032982   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:36.132490   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:36.248776   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:36.249119   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:36.533499   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:36.631988   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:36.749385   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:36.749758   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:37.033350   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:37.131770   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:37.249247   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:37.249628   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:37.430856   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:37.532843   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:37.632359   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:37.748704   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:37.749002   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:38.032752   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:38.132301   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:38.248619   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:38.249266   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:38.533360   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:38.631718   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:38.749031   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:38.749371   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:39.033571   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:39.132181   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:39.248407   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:39.248863   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:39.431113   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:39.533483   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:39.631970   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:39.749127   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:39.749498   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:40.032583   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:40.131976   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:40.249304   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:40.249738   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:40.533163   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:40.631473   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:40.748891   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:40.749468   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.032705   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:41.132285   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:41.248530   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:41.249032   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.533199   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:41.631596   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:41.748844   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:41.749922   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:41.931608   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:42.033113   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:42.131418   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:42.248812   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:42.249143   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:42.533306   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:42.631764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:42.748932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:42.749371   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:43.032478   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:43.131853   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:43.249088   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:43.249728   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:43.532884   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:43.632642   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:43.748599   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:43.749065   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:44.033602   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:44.132171   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:44.249344   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:44.249835   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:44.433599   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:44.532662   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:44.632181   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:44.748443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:44.748785   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:45.033368   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:45.131859   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:45.249263   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:45.249709   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:45.533096   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:45.631376   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:45.748955   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:45.749258   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.033511   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:46.132347   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:46.248739   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:46.249160   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.532647   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:46.632424   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:46.748779   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:46.749373   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:46.931183   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:47.033472   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:47.131786   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:47.249291   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:47.249573   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:47.533062   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:47.631443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:47.749019   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:47.749416   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:48.032697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:48.132659   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:48.249020   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:48.249401   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:48.532863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:48.632443   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:48.748984   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:48.749413   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:49.032778   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:49.132449   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:49.248740   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:49.249158   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:49.430379   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:49.532894   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:49.632308   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:49.748689   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:49.749158   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:50.033151   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:50.131571   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:50.249014   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:50.249328   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:50.532829   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:50.632333   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:50.748757   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:50.749169   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:51.033369   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:51.131932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:51.249267   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:51.249658   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:51.430918   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:51.533471   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:51.632010   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:51.749072   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:51.749695   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:52.033468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:52.131895   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:52.249214   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:52.249830   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:52.533324   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:52.631661   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:52.749011   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:52.749470   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:53.033460   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:53.131849   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:53.249377   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:53.249709   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:53.431009   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:53.533596   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:53.632155   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:53.748462   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:53.748914   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:54.033214   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:54.131618   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:54.249008   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:54.249448   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:54.533042   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:54.632633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:54.748999   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:54.749588   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:55.033799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:55.132232   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:55.248600   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:55.248972   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:55.431132   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:55.533498   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:55.632249   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:55.748409   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:55.748799   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:56.033232   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:56.131633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:56.249087   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:56.249443   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:56.532853   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:56.632090   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:56.748878   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:56.748892   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:57.032670   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:57.132402   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:57.248887   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:57.249314   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:57.431495   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:30:57.532764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:57.632398   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:57.748750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:57.749249   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:58.032988   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:58.132605   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:58.248826   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:58.249443   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:58.533246   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:58.632466   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:58.748323   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:58.748971   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.033150   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:59.131282   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:59.248607   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:59.249030   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.533380   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:30:59.631811   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:30:59.749264   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:30:59.749909   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:30:59.930808   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:31:00.033110   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:00.131575   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:00.248601   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:00.248948   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:00.533625   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:00.632215   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:00.748540   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:00.749110   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:01.033691   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:01.132060   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:01.249399   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:01.249913   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:01.533411   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:01.631698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:01.749129   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:01.749394   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.032821   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:02.132265   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:02.248609   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:02.249248   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.431210   13892 node_ready.go:53] node "addons-022322" has status "Ready":"False"
	I0915 06:31:02.533582   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:02.632031   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:02.749318   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:02.749753   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:02.938686   13892 node_ready.go:49] node "addons-022322" has status "Ready":"True"
	I0915 06:31:02.938772   13892 node_ready.go:38] duration metric: took 41.010898206s for node "addons-022322" to be "Ready" ...
	I0915 06:31:02.938800   13892 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
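(Editor's note: node_ready.go flips from "Ready":"False" to "Ready":"True" here after roughly 41s of polling; what is being tested is the node's NodeReady condition. A minimal sketch of that check, under the same client-go imports as the earlier example; the helper name is hypothetical.)

	// Assumed sketch of the readiness test behind the node_ready.go lines:
	// a node counts as "Ready" when its NodeReady condition is ConditionTrue.
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue // logged as "Ready":"True"
			}
		}
		return false // no NodeReady condition reported yet
	}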
	I0915 06:31:02.947092   13892 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.037453   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:03.134905   13892 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:31:03.134932   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:03.249093   13892 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:31:03.249112   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:03.249662   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:03.534546   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:03.636557   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:03.751133   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:03.751759   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:03.952699   13892 pod_ready.go:93] pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.952725   13892 pod_ready.go:82] duration metric: took 1.005603448s for pod "coredns-7c65d6cfc9-xrtf5" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.952743   13892 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.956791   13892 pod_ready.go:93] pod "etcd-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.956833   13892 pod_ready.go:82] duration metric: took 4.073042ms for pod "etcd-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.956850   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.960877   13892 pod_ready.go:93] pod "kube-apiserver-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.960900   13892 pod_ready.go:82] duration metric: took 4.034597ms for pod "kube-apiserver-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.960911   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.965260   13892 pod_ready.go:93] pod "kube-controller-manager-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:03.965283   13892 pod_ready.go:82] duration metric: took 4.363575ms for pod "kube-controller-manager-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:03.965299   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gw7ff" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.033697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:04.132473   13892 pod_ready.go:93] pod "kube-proxy-gw7ff" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:04.132554   13892 pod_ready.go:82] duration metric: took 167.246699ms for pod "kube-proxy-gw7ff" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.132578   13892 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.136244   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:04.251490   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:04.252243   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:04.533023   13892 pod_ready.go:93] pod "kube-scheduler-addons-022322" in "kube-system" namespace has status "Ready":"True"
	I0915 06:31:04.533103   13892 pod_ready.go:82] duration metric: took 400.506171ms for pod "kube-scheduler-addons-022322" in "kube-system" namespace to be "Ready" ...
	I0915 06:31:04.533131   13892 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace to be "Ready" ...
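(Editor's note: from this point pod_ready.go waits on individual pods' Ready condition; metrics-server stays "Ready":"False" for a while in the lines below. A sketch under the same assumptions and imports as the earlier examples; the helper name is hypothetical.)

	// Assumed sketch of the per-pod check behind the pod_ready.go lines:
	// a pod counts as "Ready" when its PodReady condition is ConditionTrue.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false // condition not reported yet, e.g. pod still starting
	}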
	I0915 06:31:04.533863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:04.634658   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:04.749985   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:04.750620   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:05.033858   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:05.133473   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:05.249607   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:05.250016   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:05.533512   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:05.633522   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:05.749567   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:05.750619   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:06.033337   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:06.132883   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:06.251011   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:06.251171   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:06.533695   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:06.537858   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:06.633310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:06.749666   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:06.750659   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:07.033710   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:07.133859   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:07.250107   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:07.250514   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:07.533553   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:07.633929   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:07.749698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:07.750015   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:08.033127   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:08.132358   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:08.249375   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:08.250351   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:08.533052   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:08.538331   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:08.632846   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:08.750600   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:08.751091   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:09.033893   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:09.133772   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:09.249846   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.250485   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:09.533541   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:09.634329   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:09.749468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:09.749927   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:10.032951   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:10.133703   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:10.249374   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:10.250142   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:10.533264   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:10.634824   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:10.749724   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:10.749950   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:11.033288   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:11.038713   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:11.133046   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:11.249103   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:11.249357   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:11.533301   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:11.632698   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:11.749784   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:11.750069   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:12.033157   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:12.132818   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:12.249697   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:12.250174   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:12.533250   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:12.633141   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:12.749453   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:12.749779   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:13.033165   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:13.132738   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:13.249754   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:13.250133   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:13.533097   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:13.537943   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:13.635262   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:13.749235   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:13.749608   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:14.033344   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:14.134224   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:14.250178   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:14.250386   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:14.532745   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:14.632274   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:14.749463   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:14.749574   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:15.032578   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:15.132543   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:15.249733   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:15.250131   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:15.533283   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:15.635694   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:15.749500   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:15.749903   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:16.033326   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:16.037154   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:16.132492   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:16.249928   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:16.250220   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:16.533621   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:16.633765   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:16.749606   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:16.750083   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:17.033424   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:17.133632   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:17.249099   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:17.249293   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:17.533944   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:17.635728   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:17.749747   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:17.749845   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:18.033242   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:18.133749   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:18.248979   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:18.249435   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:18.533485   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:18.537953   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:18.634427   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:18.749507   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:18.750729   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:19.033132   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:19.133614   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:19.250070   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:19.250669   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:19.533209   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:19.634429   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:19.749576   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:19.750000   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:20.033510   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:20.133879   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:20.250067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:20.250469   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:20.533633   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:20.633286   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:20.749441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:20.749850   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:21.032951   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:21.037580   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:21.133010   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:21.249096   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:21.249327   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:21.533841   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:21.636703   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:21.750045   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:21.750258   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:22.033777   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:22.133441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:22.250313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:22.250819   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:22.533952   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:22.632273   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:22.749762   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:22.750018   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:23.033083   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:23.037994   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:23.133419   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:23.249942   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:23.250259   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:23.533730   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:23.633468   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:23.749343   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:23.749675   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:24.034567   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:24.133677   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:24.249854   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:24.250284   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:24.533692   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:24.635572   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:24.749613   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:24.749916   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:25.033066   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:25.038206   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:25.132536   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:25.249706   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:25.250366   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:25.533750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:25.633778   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:25.750162   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:25.750492   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.032739   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:26.133178   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:26.249808   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.250389   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:26.533398   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:26.632980   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:26.749044   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:26.749242   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.033678   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:27.132456   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:27.249550   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.249778   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:27.532989   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:27.537774   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:27.632926   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:27.749383   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:27.749640   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.033168   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:28.132791   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.249100   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.249491   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:28.533927   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:28.633791   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:28.750246   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:28.750586   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.034176   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:29.134799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.326913   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.328515   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:29.533911   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:29.538178   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:29.634297   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:29.750998   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:29.751378   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.033198   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.133588   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.249814   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.250074   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:30.533173   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:30.634738   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:30.749679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:30.750305   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.033423   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.133414   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.250044   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.251160   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:31.533304   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:31.633864   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:31.750141   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:31.750451   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.033133   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.037779   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:32.136313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.249954   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.250075   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:32.533300   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:32.633419   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:32.749736   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:32.749765   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.034007   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.133723   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.251986   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.252651   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:33.533521   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:33.632441   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:33.749489   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:33.750028   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.033420   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.133332   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.249806   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.250249   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:34.534059   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:34.537695   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:34.633237   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:34.749972   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:34.750523   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.033433   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.134668   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.249067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.249280   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:35.533868   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:35.633700   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:35.751799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:35.752239   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.033863   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.135788   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.261209   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.261484   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:36.534169   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:36.538356   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:36.635005   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:36.749444   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:36.749741   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.033143   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.134759   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.249201   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.249293   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:37.533999   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:37.633966   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:37.749679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:37.750282   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.034292   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.135654   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.248750   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.249021   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:38.533563   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:38.538901   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:38.634050   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:38.750025   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:38.750354   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.033208   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.134881   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.250167   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.250578   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:39.533950   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:39.633617   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:39.749971   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:39.750223   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.033298   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.134948   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.249689   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.249968   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:40.533359   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:40.633818   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:40.749314   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:40.750010   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.033236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.037513   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:41.132679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.249029   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.249263   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:41.533936   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:41.633190   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:41.749449   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:41.749911   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.033106   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.133817   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.249836   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.250431   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:42.535637   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:42.633862   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:42.749067   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:31:42.749419   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.033542   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.038254   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:43.132986   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.249533   13892 kapi.go:107] duration metric: took 1m17.003470316s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:31:43.249679   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:43.533132   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:43.635084   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:43.824289   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.034118   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.135800   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.250034   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:44.533788   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:44.634382   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:44.825384   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.035081   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.041001   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:45.134128   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.324267   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:45.532799   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:45.634388   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:45.750074   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.033800   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.133411   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.249977   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:46.533385   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:46.633892   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:46.749200   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.033644   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.133340   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.254798   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:47.534822   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:47.538268   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:47.633121   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:47.750145   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.034050   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.133341   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.249584   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:48.534071   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:48.633605   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:48.749704   13892 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:31:49.033188   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.134519   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:49.250183   13892 kapi.go:107] duration metric: took 1m23.00438592s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:31:49.533890   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:49.538762   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:49.635540   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.033558   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.134427   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:50.533564   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:50.633920   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.033803   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.133735   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:51.533829   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:51.632841   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.033313   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.038094   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:52.133649   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:52.533764   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:52.633086   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.033466   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.134242   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:53.533335   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:53.632408   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.033715   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.133140   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:54.533484   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:54.538357   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:54.633319   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.033334   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.135308   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:31:55.534278   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:55.632743   13892 kapi.go:107] duration metric: took 1m28.503900328s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:31:56.033022   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:56.533339   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.033408   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:57.037428   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:57.533745   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.033869   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:58.561194   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.033310   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:31:59.037527   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:31:59.533635   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.033679   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:00.533809   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.033525   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.532938   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:01.538141   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:02.033393   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:02.533588   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.033570   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.534054   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:03.538193   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:04.033637   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:04.533236   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.033082   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:05.533172   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.033825   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:06.037689   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:06.533490   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.033488   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:07.533224   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.033746   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:08.038349   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:08.532934   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.035261   13892 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:32:09.533246   13892 kapi.go:107] duration metric: took 1m39.003196071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:32:09.535024   13892 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-022322 cluster.
	I0915 06:32:09.536557   13892 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:32:09.537938   13892 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:32:09.539455   13892 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, helm-tiller, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0915 06:32:09.540834   13892 addons.go:510] duration metric: took 1m49.238162954s for enable addons: enabled=[default-storageclass storage-provisioner ingress-dns nvidia-device-plugin helm-tiller cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
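The out.go messages above describe gcp-auth's opt-out mechanism: a pod that carries the `gcp-auth-skip-secret` label is excluded from credential mounting. For illustration only (not part of this report), a minimal pod spec with that label could be built with client-go types as below; the pod name and the label value "true" are assumptions, since the log names only the key.
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)
	
	func main() {
		// Hypothetical pod that opts out of gcp-auth credential mounting.
		// The report names only the label key; the value "true" is assumed.
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds-demo", // hypothetical name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "gcr.io/k8s-minikube/busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		b, err := yaml.Marshal(pod)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(b)) // emits a manifest suitable for `kubectl apply -f -`
	}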
	I0915 06:32:10.055748   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:12.538990   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:15.038859   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:17.539022   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:20.038101   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:22.038820   13892 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"False"
	I0915 06:32:23.537933   13892 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace has status "Ready":"True"
	I0915 06:32:23.537954   13892 pod_ready.go:82] duration metric: took 1m19.004805064s for pod "metrics-server-84c5f94fbc-gv786" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.537962   13892 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.541840   13892 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace has status "Ready":"True"
	I0915 06:32:23.541860   13892 pod_ready.go:82] duration metric: took 3.891408ms for pod "nvidia-device-plugin-daemonset-7x4t6" in "kube-system" namespace to be "Ready" ...
	I0915 06:32:23.541876   13892 pod_ready.go:39] duration metric: took 1m20.602996157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
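The long runs of kapi.go:96 lines above come from a label-selector polling loop: roughly every 500ms minikube lists pods matching a selector (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, and so on) and keeps waiting while any matching pod is still Pending; kapi.go:107 then records the total duration once the selector is satisfied. A minimal sketch of such a loop, assuming client-go (the function name, timeout handling, and interval are illustrative, not minikube's actual kapi.go code):
	package kapiwait
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForLabeledPods polls until at least one pod matches selector in ns
	// and every matching pod reports phase Running. Sketch only.
	func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
						break
					}
				}
				if allRunning {
					// analogous to the kapi.go:107 "duration metric" line
					fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
					return nil
				}
			}
			if time.Since(start) > timeout {
				return fmt.Errorf("timed out waiting for pods matching %q", selector)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the timestamps above
		}
	}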
	I0915 06:32:23.541894   13892 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:32:23.541935   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:23.541985   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:23.576334   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:23.576356   13892 cri.go:89] found id: ""
	I0915 06:32:23.576365   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:23.576422   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.579515   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:23.579565   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:23.612826   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:23.612848   13892 cri.go:89] found id: ""
	I0915 06:32:23.612859   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:23.612912   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.615937   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:23.616004   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:23.648343   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:23.648362   13892 cri.go:89] found id: ""
	I0915 06:32:23.648370   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:23.648421   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.651502   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:23.651550   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:23.683263   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:23.683283   13892 cri.go:89] found id: ""
	I0915 06:32:23.683291   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:23.683342   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.686441   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:23.686492   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:23.718280   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:23.718303   13892 cri.go:89] found id: ""
	I0915 06:32:23.718311   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:23.718362   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.721633   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:23.721680   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:23.752697   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:23.752714   13892 cri.go:89] found id: ""
	I0915 06:32:23.752721   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:23.752768   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.755879   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:23.755942   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:23.787801   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:23.787820   13892 cri.go:89] found id: ""
	I0915 06:32:23.787826   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:23.787876   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:23.791129   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:23.791151   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:23.867026   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:23.867061   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:23.901983   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:23.902011   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:23.935110   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:23.935141   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:23.988900   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:23.988938   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:24.031371   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:24.031405   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:24.081347   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:24.081384   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:24.122044   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:24.122095   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:24.155921   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:24.155948   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:24.196166   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:24.196216   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:24.263412   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:24.263447   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:24.275361   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:24.275390   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
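The "Gathering logs for ..." pass above can be reproduced by hand against the same node. The commands below are the ones the runner itself issues (shown verbatim in the log lines above), run inside the node after `minikube ssh`; `<container-id>` stands for an ID returned by the ps command and is left unfilled here:
	minikube ssh
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /usr/bin/crictl logs --tail 400 <container-id>
	sudo journalctl -u kubelet -n 400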
	I0915 06:32:26.871834   13892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:32:26.884976   13892 api_server.go:72] duration metric: took 2m6.582339744s to wait for apiserver process to appear ...
	I0915 06:32:26.885002   13892 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:32:26.885037   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:26.885094   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:26.916059   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:26.916084   13892 cri.go:89] found id: ""
	I0915 06:32:26.916094   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:26.916150   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.919091   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:26.919141   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:26.950001   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:26.950025   13892 cri.go:89] found id: ""
	I0915 06:32:26.950041   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:26.950092   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.953219   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:26.953681   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:26.986623   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:26.986647   13892 cri.go:89] found id: ""
	I0915 06:32:26.986653   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:26.986697   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:26.989805   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:26.989862   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:27.020895   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:27.020916   13892 cri.go:89] found id: ""
	I0915 06:32:27.020923   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:27.020964   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.023987   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:27.024043   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:27.055667   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:27.055687   13892 cri.go:89] found id: ""
	I0915 06:32:27.055695   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:27.055736   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.058824   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:27.058872   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:27.090021   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:27.090042   13892 cri.go:89] found id: ""
	I0915 06:32:27.090049   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:27.090092   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.093202   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:27.093251   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:27.125406   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:27.125425   13892 cri.go:89] found id: ""
	I0915 06:32:27.125431   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:27.125470   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:27.128687   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:27.128708   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:32:27.221426   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:27.221463   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:27.264237   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:27.264271   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:27.310366   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:27.310397   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:27.343769   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:27.343796   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:27.374824   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:27.374856   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:27.430978   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:27.431014   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:27.466156   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:27.466183   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:27.534355   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:27.534389   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:27.572880   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:27.572907   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:27.650217   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:27.650248   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:27.689764   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:27.689790   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:30.201718   13892 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:32:30.205361   13892 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:32:30.206248   13892 api_server.go:141] control plane version: v1.31.1
	I0915 06:32:30.206274   13892 api_server.go:131] duration metric: took 3.321265546s to wait for apiserver health ...
	I0915 06:32:30.206281   13892 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:32:30.206300   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0915 06:32:30.206346   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0915 06:32:30.247576   13892 cri.go:89] found id: "cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:30.247601   13892 cri.go:89] found id: ""
	I0915 06:32:30.247616   13892 logs.go:276] 1 containers: [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0]
	I0915 06:32:30.247665   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.251237   13892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0915 06:32:30.251299   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0915 06:32:30.337514   13892 cri.go:89] found id: "8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:30.337535   13892 cri.go:89] found id: ""
	I0915 06:32:30.337542   13892 logs.go:276] 1 containers: [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071]
	I0915 06:32:30.337580   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.340694   13892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0915 06:32:30.340761   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0915 06:32:30.374248   13892 cri.go:89] found id: "3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:30.374270   13892 cri.go:89] found id: ""
	I0915 06:32:30.374277   13892 logs.go:276] 1 containers: [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2]
	I0915 06:32:30.374315   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.377794   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0915 06:32:30.377865   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0915 06:32:30.447654   13892 cri.go:89] found id: "793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:30.447678   13892 cri.go:89] found id: ""
	I0915 06:32:30.447687   13892 logs.go:276] 1 containers: [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7]
	I0915 06:32:30.447735   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.450965   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0915 06:32:30.451014   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0915 06:32:30.528575   13892 cri.go:89] found id: "2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:30.528594   13892 cri.go:89] found id: ""
	I0915 06:32:30.528601   13892 logs.go:276] 1 containers: [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f]
	I0915 06:32:30.528652   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.532059   13892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0915 06:32:30.532122   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0915 06:32:30.566547   13892 cri.go:89] found id: "b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:30.566565   13892 cri.go:89] found id: ""
	I0915 06:32:30.566572   13892 logs.go:276] 1 containers: [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317]
	I0915 06:32:30.566612   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.569834   13892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0915 06:32:30.569904   13892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0915 06:32:30.603072   13892 cri.go:89] found id: "8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:30.603098   13892 cri.go:89] found id: ""
	I0915 06:32:30.603109   13892 logs.go:276] 1 containers: [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f]
	I0915 06:32:30.603155   13892 ssh_runner.go:195] Run: which crictl
	I0915 06:32:30.606231   13892 logs.go:123] Gathering logs for dmesg ...
	I0915 06:32:30.606251   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:32:30.617438   13892 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:32:30.617461   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:32:30.726726   13892 logs.go:123] Gathering logs for kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] ...
	I0915 06:32:30.726754   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f"
	I0915 06:32:30.759609   13892 logs.go:123] Gathering logs for kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] ...
	I0915 06:32:30.759631   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317"
	I0915 06:32:30.814163   13892 logs.go:123] Gathering logs for kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] ...
	I0915 06:32:30.814196   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f"
	I0915 06:32:30.848586   13892 logs.go:123] Gathering logs for container status ...
	I0915 06:32:30.848611   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:32:30.889221   13892 logs.go:123] Gathering logs for kubelet ...
	I0915 06:32:30.889248   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:32:30.955679   13892 logs.go:123] Gathering logs for kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] ...
	I0915 06:32:30.955711   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0"
	I0915 06:32:31.010974   13892 logs.go:123] Gathering logs for etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] ...
	I0915 06:32:31.011012   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071"
	I0915 06:32:31.062696   13892 logs.go:123] Gathering logs for coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] ...
	I0915 06:32:31.062727   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2"
	I0915 06:32:31.097720   13892 logs.go:123] Gathering logs for kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] ...
	I0915 06:32:31.097751   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7"
	I0915 06:32:31.139225   13892 logs.go:123] Gathering logs for CRI-O ...
	I0915 06:32:31.139253   13892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0915 06:32:33.738096   13892 system_pods.go:59] 19 kube-system pods found
	I0915 06:32:33.738132   13892 system_pods.go:61] "coredns-7c65d6cfc9-xrtf5" [3d071306-6186-47d8-a38c-c09d0565172e] Running
	I0915 06:32:33.738138   13892 system_pods.go:61] "csi-hostpath-attacher-0" [b9779b21-66d4-497b-95ca-d4e3bb1f440d] Running
	I0915 06:32:33.738143   13892 system_pods.go:61] "csi-hostpath-resizer-0" [d1de7650-462e-48b8-a7c4-d41806ea999d] Running
	I0915 06:32:33.738146   13892 system_pods.go:61] "csi-hostpathplugin-r87k9" [55f95c6b-c8ef-44a8-8502-9101b3c1a6bc] Running
	I0915 06:32:33.738149   13892 system_pods.go:61] "etcd-addons-022322" [47de0033-c753-46fa-8a91-f22a259be595] Running
	I0915 06:32:33.738153   13892 system_pods.go:61] "kindnet-wj66m" [54288115-3d96-4604-8d43-05eb4463ffa4] Running
	I0915 06:32:33.738156   13892 system_pods.go:61] "kube-apiserver-addons-022322" [6deaca10-4203-4248-8a4f-6d69cd208f8d] Running
	I0915 06:32:33.738159   13892 system_pods.go:61] "kube-controller-manager-addons-022322" [91941bbe-e2ca-4927-8822-171a063ffbe7] Running
	I0915 06:32:33.738162   13892 system_pods.go:61] "kube-ingress-dns-minikube" [5079ffa6-3a78-4f89-b9b1-96c20fca6fb6] Running
	I0915 06:32:33.738166   13892 system_pods.go:61] "kube-proxy-gw7ff" [e4cb2a76-ff95-4461-9c14-70ee381b42b0] Running
	I0915 06:32:33.738169   13892 system_pods.go:61] "kube-scheduler-addons-022322" [6afa8b86-1784-40cf-a887-1e69ffa32f03] Running
	I0915 06:32:33.738172   13892 system_pods.go:61] "metrics-server-84c5f94fbc-gv786" [f7898557-9596-4239-9fab-1fce4db35921] Running
	I0915 06:32:33.738175   13892 system_pods.go:61] "nvidia-device-plugin-daemonset-7x4t6" [549d014b-a13d-466e-8959-d22764717045] Running
	I0915 06:32:33.738179   13892 system_pods.go:61] "registry-66c9cd494c-q5ztn" [d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b] Running
	I0915 06:32:33.738182   13892 system_pods.go:61] "registry-proxy-v7tht" [97f7a0a8-94e9-42f2-8e49-9731910d0d64] Running
	I0915 06:32:33.738185   13892 system_pods.go:61] "snapshot-controller-56fcc65765-h6nwh" [4b24f9d0-a988-4767-96ad-bf7e26d377ef] Running
	I0915 06:32:33.738188   13892 system_pods.go:61] "snapshot-controller-56fcc65765-kndfm" [402c59b1-bcf6-4b08-9646-8a21aed37020] Running
	I0915 06:32:33.738191   13892 system_pods.go:61] "storage-provisioner" [10257ad9-5003-4e70-ab68-778fc1738cc4] Running
	I0915 06:32:33.738193   13892 system_pods.go:61] "tiller-deploy-b48cc5f79-tpczq" [e9d5480f-8c59-4ab5-b5fc-a6fcd1801c51] Running
	I0915 06:32:33.738198   13892 system_pods.go:74] duration metric: took 3.531911981s to wait for pod list to return data ...
	I0915 06:32:33.738204   13892 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:32:33.740398   13892 default_sa.go:45] found service account: "default"
	I0915 06:32:33.740416   13892 default_sa.go:55] duration metric: took 2.207623ms for default service account to be created ...
	I0915 06:32:33.740424   13892 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:32:33.748862   13892 system_pods.go:86] 19 kube-system pods found
	I0915 06:32:33.748886   13892 system_pods.go:89] "coredns-7c65d6cfc9-xrtf5" [3d071306-6186-47d8-a38c-c09d0565172e] Running
	I0915 06:32:33.748892   13892 system_pods.go:89] "csi-hostpath-attacher-0" [b9779b21-66d4-497b-95ca-d4e3bb1f440d] Running
	I0915 06:32:33.748896   13892 system_pods.go:89] "csi-hostpath-resizer-0" [d1de7650-462e-48b8-a7c4-d41806ea999d] Running
	I0915 06:32:33.748900   13892 system_pods.go:89] "csi-hostpathplugin-r87k9" [55f95c6b-c8ef-44a8-8502-9101b3c1a6bc] Running
	I0915 06:32:33.748903   13892 system_pods.go:89] "etcd-addons-022322" [47de0033-c753-46fa-8a91-f22a259be595] Running
	I0915 06:32:33.748907   13892 system_pods.go:89] "kindnet-wj66m" [54288115-3d96-4604-8d43-05eb4463ffa4] Running
	I0915 06:32:33.748912   13892 system_pods.go:89] "kube-apiserver-addons-022322" [6deaca10-4203-4248-8a4f-6d69cd208f8d] Running
	I0915 06:32:33.748915   13892 system_pods.go:89] "kube-controller-manager-addons-022322" [91941bbe-e2ca-4927-8822-171a063ffbe7] Running
	I0915 06:32:33.748919   13892 system_pods.go:89] "kube-ingress-dns-minikube" [5079ffa6-3a78-4f89-b9b1-96c20fca6fb6] Running
	I0915 06:32:33.748922   13892 system_pods.go:89] "kube-proxy-gw7ff" [e4cb2a76-ff95-4461-9c14-70ee381b42b0] Running
	I0915 06:32:33.748927   13892 system_pods.go:89] "kube-scheduler-addons-022322" [6afa8b86-1784-40cf-a887-1e69ffa32f03] Running
	I0915 06:32:33.748935   13892 system_pods.go:89] "metrics-server-84c5f94fbc-gv786" [f7898557-9596-4239-9fab-1fce4db35921] Running
	I0915 06:32:33.748939   13892 system_pods.go:89] "nvidia-device-plugin-daemonset-7x4t6" [549d014b-a13d-466e-8959-d22764717045] Running
	I0915 06:32:33.748946   13892 system_pods.go:89] "registry-66c9cd494c-q5ztn" [d8dfbb0d-1d68-4db4-99e4-4313d7eedd6b] Running
	I0915 06:32:33.748949   13892 system_pods.go:89] "registry-proxy-v7tht" [97f7a0a8-94e9-42f2-8e49-9731910d0d64] Running
	I0915 06:32:33.748960   13892 system_pods.go:89] "snapshot-controller-56fcc65765-h6nwh" [4b24f9d0-a988-4767-96ad-bf7e26d377ef] Running
	I0915 06:32:33.748965   13892 system_pods.go:89] "snapshot-controller-56fcc65765-kndfm" [402c59b1-bcf6-4b08-9646-8a21aed37020] Running
	I0915 06:32:33.748970   13892 system_pods.go:89] "storage-provisioner" [10257ad9-5003-4e70-ab68-778fc1738cc4] Running
	I0915 06:32:33.748974   13892 system_pods.go:89] "tiller-deploy-b48cc5f79-tpczq" [e9d5480f-8c59-4ab5-b5fc-a6fcd1801c51] Running
	I0915 06:32:33.748983   13892 system_pods.go:126] duration metric: took 8.554163ms to wait for k8s-apps to be running ...
	I0915 06:32:33.748991   13892 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:32:33.749033   13892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:32:33.759914   13892 system_svc.go:56] duration metric: took 10.915717ms WaitForService to wait for kubelet
	I0915 06:32:33.759944   13892 kubeadm.go:582] duration metric: took 2m13.45731059s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:32:33.759970   13892 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:32:33.762677   13892 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0915 06:32:33.762700   13892 node_conditions.go:123] node cpu capacity is 8
	I0915 06:32:33.762712   13892 node_conditions.go:105] duration metric: took 2.737031ms to run NodePressure ...
	I0915 06:32:33.762722   13892 start.go:241] waiting for startup goroutines ...
	I0915 06:32:33.762728   13892 start.go:246] waiting for cluster config update ...
	I0915 06:32:33.762743   13892 start.go:255] writing updated cluster config ...
	I0915 06:32:33.762994   13892 ssh_runner.go:195] Run: rm -f paused
	I0915 06:32:33.810544   13892 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:32:33.812783   13892 out.go:177] * Done! kubectl is now configured to use "addons-022322" cluster and "default" namespace by default
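
The healthz wait in the trace above (api_server.go:253/279) is a plain poll: GET https://192.168.49.2:8443/healthz until it answers HTTP 200 with body "ok". A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's actual code, and the URL, timeout, and the InsecureSkipVerify shortcut are assumptions for the example.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// pollHealthz repeatedly GETs url until it answers HTTP 200 with
	// body "ok", or gives up at the deadline. Mirrors the healthz wait
	// logged above; illustrative only.
	func pollHealthz(url string, timeout time.Duration) error {
		// The test apiserver presents a self-signed certificate, so this
		// sketch skips verification; real code would load the cluster CA.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no healthy response from %s within %s", url, timeout)
	}
	
	func main() {
		if err := pollHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}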
	
	
	==> CRI-O <==
	Sep 15 06:45:30 addons-022322 crio[1033]: time="2024-09-15 06:45:30.696786807Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=eb7d0f7d-cbd3-467e-8505-fc086418f319 name=/runtime.v1.ImageService/PullImage
	Sep 15 06:45:30 addons-022322 crio[1033]: time="2024-09-15 06:45:30.698171340Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Sep 15 06:45:40 addons-022322 crio[1033]: time="2024-09-15 06:45:40.654366040Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b5408ee5-4655-456e-a6bd-af693fdb7eb9 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:45:40 addons-022322 crio[1033]: time="2024-09-15 06:45:40.654667920Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b5408ee5-4655-456e-a6bd-af693fdb7eb9 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:45:42 addons-022322 crio[1033]: time="2024-09-15 06:45:42.654280922Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=462f75d2-9084-4f21-89f0-8624ed996a9b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:45:42 addons-022322 crio[1033]: time="2024-09-15 06:45:42.654518187Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=462f75d2-9084-4f21-89f0-8624ed996a9b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:45:54 addons-022322 crio[1033]: time="2024-09-15 06:45:54.654664226Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c8824b56-5b2d-4664-b8f7-3a057e7a9d7b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:45:54 addons-022322 crio[1033]: time="2024-09-15 06:45:54.654686051Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=07394cb2-aa70-444d-be4b-41ccb53229b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:45:54 addons-022322 crio[1033]: time="2024-09-15 06:45:54.655004166Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=07394cb2-aa70-444d-be4b-41ccb53229b5 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:45:54 addons-022322 crio[1033]: time="2024-09-15 06:45:54.655062999Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c8824b56-5b2d-4664-b8f7-3a057e7a9d7b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:46:01 addons-022322 crio[1033]: time="2024-09-15 06:46:01.356873800Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e54f4d0f-bc04-4fd7-ae9d-a4e08c0d29d5 name=/runtime.v1.ImageService/PullImage
	Sep 15 06:46:01 addons-022322 crio[1033]: time="2024-09-15 06:46:01.373849049Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 15 06:46:01 addons-022322 crio[1033]: time="2024-09-15 06:46:01.988257993Z" level=info msg="Stopping pod sandbox: 19e5163f7466d13cf98a6d1d7694553bfc02dfe3055170eebfb47161a1e60c16" id=4956f149-1029-4160-ac01-af523ec385aa name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:46:01 addons-022322 crio[1033]: time="2024-09-15 06:46:01.988507945Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6 Namespace:local-path-storage ID:19e5163f7466d13cf98a6d1d7694553bfc02dfe3055170eebfb47161a1e60c16 UID:fd81388a-b9de-4e7d-8d84-0eefd7b1070a NetNS:/var/run/netns/9d7c2463-3700-4ede-b178-9647124040a9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:46:01 addons-022322 crio[1033]: time="2024-09-15 06:46:01.988654195Z" level=info msg="Deleting pod local-path-storage_helper-pod-create-pvc-a939ce70-1255-4d35-b78f-729a550689f6 from CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:46:02 addons-022322 crio[1033]: time="2024-09-15 06:46:02.027465527Z" level=info msg="Stopped pod sandbox: 19e5163f7466d13cf98a6d1d7694553bfc02dfe3055170eebfb47161a1e60c16" id=4956f149-1029-4160-ac01-af523ec385aa name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:46:06 addons-022322 crio[1033]: time="2024-09-15 06:46:06.654480047Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3be0020e-eb85-45b9-b679-24f986541445 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:46:06 addons-022322 crio[1033]: time="2024-09-15 06:46:06.654718435Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3be0020e-eb85-45b9-b679-24f986541445 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:46:14 addons-022322 crio[1033]: time="2024-09-15 06:46:14.934237096Z" level=info msg="Stopping pod sandbox: 19e5163f7466d13cf98a6d1d7694553bfc02dfe3055170eebfb47161a1e60c16" id=f0f4935d-d33f-4d11-b1a8-169771043951 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:46:14 addons-022322 crio[1033]: time="2024-09-15 06:46:14.934286613Z" level=info msg="Stopped pod sandbox (already stopped): 19e5163f7466d13cf98a6d1d7694553bfc02dfe3055170eebfb47161a1e60c16" id=f0f4935d-d33f-4d11-b1a8-169771043951 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 15 06:46:14 addons-022322 crio[1033]: time="2024-09-15 06:46:14.934607668Z" level=info msg="Removing pod sandbox: 19e5163f7466d13cf98a6d1d7694553bfc02dfe3055170eebfb47161a1e60c16" id=a56d29b6-26cf-414c-97b6-4140185f40dc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 15 06:46:14 addons-022322 crio[1033]: time="2024-09-15 06:46:14.939733535Z" level=info msg="Removed pod sandbox: 19e5163f7466d13cf98a6d1d7694553bfc02dfe3055170eebfb47161a1e60c16" id=a56d29b6-26cf-414c-97b6-4140185f40dc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 15 06:46:20 addons-022322 crio[1033]: time="2024-09-15 06:46:20.654553910Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=53280b3e-d6bc-43d5-b9a2-f6d6694fa758 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:46:20 addons-022322 crio[1033]: time="2024-09-15 06:46:20.654865688Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=53280b3e-d6bc-43d5-b9a2-f6d6694fa758 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:46:23 addons-022322 crio[1033]: time="2024-09-15 06:46:23.649214033Z" level=info msg="Stopping container: a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec (timeout: 30s)" id=72f22bdc-17e4-4221-a5fd-31492b1fa93c name=/runtime.v1.RuntimeService/StopContainer
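
The ImageStatus and PullImage entries in the CRI-O log above are served over the same CRI socket that the repeated `sudo crictl ps -a --quiet --name=<component>` calls earlier in the trace query. A minimal sketch of that container-ID lookup, shelling out to crictl the way the log-gathering step does; it assumes crictl is installed and passwordless sudo is available, and it is not minikube's implementation.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs mirrors the repeated
	//   sudo crictl ps -a --quiet --name=<component>
	// calls in the trace above: it returns the IDs of all containers
	// (running or exited) whose name matches. Assumes crictl is
	// installed and sudo needs no password.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		ids, err := containerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Println("found ids:", ids)
	}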
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	be7dda375439d       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   d00635454c734       nginx
	ebf8a7f6a2815       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   fd3e91b2fb80d       gcp-auth-89d5ffd79-f42ql
	a31e0f0167cc9       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Running             metrics-server            0                   a3eb6e2a55c01       metrics-server-84c5f94fbc-gv786
	e02acb9daf95c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        15 minutes ago      Running             local-path-provisioner    0                   45ad5754c4627       local-path-provisioner-86d989889c-dmzqm
	3e976270afdc6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   749159bde67b6       coredns-7c65d6cfc9-xrtf5
	f16ac41ad768c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   b981d61af6f0a       storage-provisioner
	8a93f6647ecee       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                        16 minutes ago      Running             kindnet-cni               0                   1db5bf8d5ef4a       kindnet-wj66m
	2357c6fca0125       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        16 minutes ago      Running             kube-proxy                0                   ad944dd66325b       kube-proxy-gw7ff
	8cd403ba68b5e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        16 minutes ago      Running             etcd                      0                   3704996f909cf       etcd-addons-022322
	cd45634612a50       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        16 minutes ago      Running             kube-apiserver            0                   1b2ea9f7b9f0a       kube-apiserver-addons-022322
	793a3d9d3aa84       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        16 minutes ago      Running             kube-scheduler            0                   0d8125e8ef959       kube-scheduler-addons-022322
	b6d57c6bce9ad       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        16 minutes ago      Running             kube-controller-manager   0                   f6b2699e528bd       kube-controller-manager-addons-022322
	
	
	==> coredns [3e976270afdc67fbff78ec15dcc37d6a77dd080e3554103503cbea4a014a64f2] <==
	[INFO] 10.244.0.18:53657 - 14329 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107885s
	[INFO] 10.244.0.18:57900 - 62309 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000073666s
	[INFO] 10.244.0.18:57900 - 27259 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112682s
	[INFO] 10.244.0.18:51135 - 25280 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004514344s
	[INFO] 10.244.0.18:51135 - 65484 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005682544s
	[INFO] 10.244.0.18:37446 - 3615 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007024634s
	[INFO] 10.244.0.18:37446 - 35842 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008710763s
	[INFO] 10.244.0.18:58524 - 29672 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004764629s
	[INFO] 10.244.0.18:58524 - 27116 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007955396s
	[INFO] 10.244.0.18:36601 - 30204 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000108259s
	[INFO] 10.244.0.18:36601 - 46072 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175121s
	[INFO] 10.244.0.21:59154 - 7876 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000214034s
	[INFO] 10.244.0.21:52693 - 54985 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000104888s
	[INFO] 10.244.0.21:53529 - 47590 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129252s
	[INFO] 10.244.0.21:51668 - 52873 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189752s
	[INFO] 10.244.0.21:47297 - 8172 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109168s
	[INFO] 10.244.0.21:45975 - 40007 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00014485s
	[INFO] 10.244.0.21:52233 - 54039 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007424492s
	[INFO] 10.244.0.21:38833 - 7325 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.010412323s
	[INFO] 10.244.0.21:52331 - 57813 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00775984s
	[INFO] 10.244.0.21:56895 - 26084 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.015034445s
	[INFO] 10.244.0.21:50418 - 4446 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006952543s
	[INFO] 10.244.0.21:60979 - 46705 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008386519s
	[INFO] 10.244.0.21:44818 - 40867 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000749057s
	[INFO] 10.244.0.21:53307 - 22244 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000849441s
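
The run of NXDOMAIN answers for registry.kube-system.svc.cluster.local.* above is ordinary resolv.conf search-path expansion: with the usual ndots:5 pod setting, a name with fewer than five dots is tried with each search suffix before being tried verbatim, and only the final as-is query returns NOERROR. The small sketch below reproduces that candidate order; the search list is read off the queries in the excerpt, the ndots value is the conventional default rather than something the report states, and the namespace-local suffix that normally comes first does not appear in this excerpt.

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// expand reproduces resolv.conf search handling: with ndots:5, a
	// name containing fewer than five dots is tried with every search
	// suffix first (the NXDOMAIN answers above) and only then as-is
	// (the final NOERROR). Search list read off the excerpt.
	func expand(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}
	
	func main() {
		search := []string{
			// the namespace-local suffix that normally comes first
			// is not visible in this excerpt and is omitted
			"svc.cluster.local",
			"cluster.local",
			"us-central1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
			fmt.Println(q)
		}
	}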
	
	
	==> describe nodes <==
	Name:               addons-022322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-022322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-022322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_30_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-022322
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:30:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-022322
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:46:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:43:19 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:43:19 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:43:19 +0000   Sun, 15 Sep 2024 06:30:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:43:19 +0000   Sun, 15 Sep 2024 06:31:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-022322
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f53fbb4eb4047c3b38331dd58a0e17d
	  System UUID:                b20760c2-a565-423c-88fb-0ebf81478f0b
	  Boot ID:                    d7eb9d55-e244-423e-b0bb-fd0ad06c12bb
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-m2kmg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  gcp-auth                    gcp-auth-89d5ffd79-f42ql                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-xrtf5                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-addons-022322                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-wj66m                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-022322               250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-022322      200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-gw7ff                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-022322               100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-dmzqm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-022322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-022322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-022322 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node addons-022322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node addons-022322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                kubelet          Node addons-022322 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node addons-022322 event: Registered Node addons-022322 in Controller
	  Normal   NodeReady                15m                kubelet          Node addons-022322 status is now: NodeReady
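
As a cross-check, the allocated-resources totals reconcile with the per-pod rows above: 850m CPU requested = 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler); 220Mi memory requested = 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet); the 100m CPU limit is kindnet's alone, and the 220Mi memory limit is coredns (170Mi) plus kindnet (50Mi). Every other pod is best-effort.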
	
	
	==> dmesg <==
	[  +0.003031] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000660] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000615] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000695] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000704] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000612] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000625] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000619] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.600975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.568733] kauditd_printk_skb: 46 callbacks suppressed
	[Sep15 06:41] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +1.004271] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +2.015809] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +4.127715] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +8.191377] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[ +16.126848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[Sep15 06:42] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
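
A "martian source" entry records a packet arriving on an interface with a source address that should be impossible there; here, 127.0.0.1-sourced packets reaching eth0 bound for 10.244.0.20. In this nested kicbase/kindnet network that is generally a benign hairpin artifact rather than a node fault, and the roughly doubling gaps between repeats suggest a retrying connection rather than a flood; this reading is an inference from the log, not something the report states.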
	
	
	==> etcd [8cd403ba68b5ebe17e67ecb4c594bb52e81ec3b0de1bfe39857e6bce3be18071] <==
	{"level":"info","ts":"2024-09-15T06:30:24.042456Z","caller":"traceutil/trace.go:171","msg":"trace[2126800427] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"104.80218ms","start":"2024-09-15T06:30:23.937643Z","end":"2024-09-15T06:30:24.042445Z","steps":["trace[2126800427] 'process raft request'  (duration: 104.169503ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.042842Z","caller":"traceutil/trace.go:171","msg":"trace[1060131522] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"103.35953ms","start":"2024-09-15T06:30:23.939467Z","end":"2024-09-15T06:30:24.042827Z","steps":["trace[1060131522] 'process raft request'  (duration: 103.268287ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.043080Z","caller":"traceutil/trace.go:171","msg":"trace[68935875] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:464; }","duration":"103.319532ms","start":"2024-09-15T06:30:23.939753Z","end":"2024-09-15T06:30:24.043073Z","steps":["trace[68935875] 'read index received'  (duration: 951.239µs)","trace[68935875] 'applied index is now lower than readState.Index'  (duration: 102.367501ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:30:24.043144Z","caller":"traceutil/trace.go:171","msg":"trace[1100533312] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"101.898324ms","start":"2024-09-15T06:30:23.941239Z","end":"2024-09-15T06:30:24.043137Z","steps":["trace[1100533312] 'process raft request'  (duration: 101.57996ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.043319Z","caller":"traceutil/trace.go:171","msg":"trace[1710169861] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"100.573964ms","start":"2024-09-15T06:30:23.942734Z","end":"2024-09-15T06:30:24.043308Z","steps":["trace[1710169861] 'process raft request'  (duration: 100.142047ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.044239Z","caller":"traceutil/trace.go:171","msg":"trace[430677801] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"101.345814ms","start":"2024-09-15T06:30:23.942848Z","end":"2024-09-15T06:30:24.044194Z","steps":["trace[430677801] 'process raft request'  (duration: 100.096168ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.044393Z","caller":"traceutil/trace.go:171","msg":"trace[1553361540] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"101.362761ms","start":"2024-09-15T06:30:23.943022Z","end":"2024-09-15T06:30:24.044385Z","steps":["trace[1553361540] 'process raft request'  (duration: 99.949501ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.043567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.801903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.044478Z","caller":"traceutil/trace.go:171","msg":"trace[303371796] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:460; }","duration":"104.72355ms","start":"2024-09-15T06:30:23.939748Z","end":"2024-09-15T06:30:24.044472Z","steps":["trace[303371796] 'agreement among raft nodes before linearized reading'  (duration: 103.480693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.631766Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.736254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.631932Z","caller":"traceutil/trace.go:171","msg":"trace[331691259] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:509; }","duration":"102.903775ms","start":"2024-09-15T06:30:24.528987Z","end":"2024-09-15T06:30:24.631891Z","steps":["trace[331691259] 'agreement among raft nodes before linearized reading'  (duration: 102.691356ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.724544Z","caller":"traceutil/trace.go:171","msg":"trace[1058426745] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"180.002052ms","start":"2024-09-15T06:30:24.544515Z","end":"2024-09-15T06:30:24.724517Z","steps":["trace[1058426745] 'process raft request'  (duration: 179.769517ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.724704Z","caller":"traceutil/trace.go:171","msg":"trace[911394532] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:522; }","duration":"179.289521ms","start":"2024-09-15T06:30:24.545401Z","end":"2024-09-15T06:30:24.724690Z","steps":["trace[911394532] 'read index received'  (duration: 92.846516ms)","trace[911394532] 'applied index is now lower than readState.Index'  (duration: 86.442357ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:30:24.724875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.201545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-15T06:30:24.724952Z","caller":"traceutil/trace.go:171","msg":"trace[1209499748] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:517; }","duration":"180.28811ms","start":"2024-09-15T06:30:24.544654Z","end":"2024-09-15T06:30:24.724942Z","steps":["trace[1209499748] 'agreement among raft nodes before linearized reading'  (duration: 180.146303ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:30:24.725112Z","caller":"traceutil/trace.go:171","msg":"trace[1655529344] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"180.345425ms","start":"2024-09-15T06:30:24.544758Z","end":"2024-09-15T06:30:24.725104Z","steps":["trace[1655529344] 'process raft request'  (duration: 179.83428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:30:24.725298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.355901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:30:24.725370Z","caller":"traceutil/trace.go:171","msg":"trace[994147516] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:517; }","duration":"180.432064ms","start":"2024-09-15T06:30:24.544929Z","end":"2024-09-15T06:30:24.725361Z","steps":["trace[994147516] 'agreement among raft nodes before linearized reading'  (duration: 180.340916ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:31:52.950195Z","caller":"traceutil/trace.go:171","msg":"trace[1140578157] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"103.841586ms","start":"2024-09-15T06:31:52.846338Z","end":"2024-09-15T06:31:52.950180Z","steps":["trace[1140578157] 'process raft request'  (duration: 103.740777ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:40:10.962662Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1592}
	{"level":"info","ts":"2024-09-15T06:40:10.985039Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1592,"took":"21.96857ms","hash":12553061,"current-db-size-bytes":6156288,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3473408,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-15T06:40:10.985077Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":12553061,"revision":1592,"compact-revision":-1}
	{"level":"info","ts":"2024-09-15T06:45:10.967548Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2012}
	{"level":"info","ts":"2024-09-15T06:45:10.983457Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2012,"took":"15.348392ms","hash":4138826877,"current-db-size-bytes":6156288,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":5017600,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-15T06:45:10.983498Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4138826877,"revision":2012,"compact-revision":1592}
	
	
	==> gcp-auth [ebf8a7f6a28156c5630a4cc474404dbbe134dc27b13486fc221e2c64f628f1f0] <==
	2024/09/15 06:32:34 Ready to write response ...
	2024/09/15 06:40:47 Ready to marshal response ...
	2024/09/15 06:40:47 Ready to write response ...
	2024/09/15 06:40:50 Ready to marshal response ...
	2024/09/15 06:40:50 Ready to write response ...
	2024/09/15 06:40:54 Ready to marshal response ...
	2024/09/15 06:40:54 Ready to write response ...
	2024/09/15 06:41:00 Ready to marshal response ...
	2024/09/15 06:41:00 Ready to write response ...
	2024/09/15 06:41:15 Ready to marshal response ...
	2024/09/15 06:41:15 Ready to write response ...
	2024/09/15 06:41:43 Ready to marshal response ...
	2024/09/15 06:41:43 Ready to write response ...
	2024/09/15 06:41:43 Ready to marshal response ...
	2024/09/15 06:41:43 Ready to write response ...
	2024/09/15 06:42:02 Ready to marshal response ...
	2024/09/15 06:42:02 Ready to write response ...
	2024/09/15 06:42:02 Ready to marshal response ...
	2024/09/15 06:42:02 Ready to write response ...
	2024/09/15 06:42:02 Ready to marshal response ...
	2024/09/15 06:42:02 Ready to write response ...
	2024/09/15 06:43:21 Ready to marshal response ...
	2024/09/15 06:43:21 Ready to write response ...
	2024/09/15 06:43:58 Ready to marshal response ...
	2024/09/15 06:43:58 Ready to write response ...
	
	
	==> kernel <==
	 06:46:24 up 28 min,  0 users,  load average: 0.22, 0.42, 0.36
	Linux addons-022322 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [8a93f6647eceea3eddd2e6053d720a5938564e0f909b43cbbe3d50a53215317f] <==
	I0915 06:44:22.741293       1 main.go:299] handling current node
	I0915 06:44:32.744264       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:44:32.744293       1 main.go:299] handling current node
	I0915 06:44:42.741336       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:44:42.741370       1 main.go:299] handling current node
	I0915 06:44:52.743840       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:44:52.743876       1 main.go:299] handling current node
	I0915 06:45:02.742169       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:45:02.742203       1 main.go:299] handling current node
	I0915 06:45:12.742025       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:45:12.742056       1 main.go:299] handling current node
	I0915 06:45:22.741334       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:45:22.741365       1 main.go:299] handling current node
	I0915 06:45:32.741298       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:45:32.741486       1 main.go:299] handling current node
	I0915 06:45:42.742154       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:45:42.742200       1 main.go:299] handling current node
	I0915 06:45:52.744880       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:45:52.744923       1 main.go:299] handling current node
	I0915 06:46:02.742152       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:46:02.742190       1 main.go:299] handling current node
	I0915 06:46:12.741486       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:46:12.741518       1 main.go:299] handling current node
	I0915 06:46:22.742002       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:46:22.742044       1 main.go:299] handling current node
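
The kindnet log is a healthy steady state: one "Handling node ... handling current node" pair every ten seconds for the single node 192.168.49.2, a fixed-interval reconcile loop with nothing to change.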
	
	
	==> kube-apiserver [cd45634612a50e85f2d46fcf812b6b74f14247c4fa63d37eeea75a1f8976bcb0] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0915 06:32:23.177804       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0915 06:40:57.536268       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.25:55822: read: connection reset by peer
	I0915 06:41:00.195132       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 06:41:00.358287       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.223.215"}
	I0915 06:41:02.972541       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0915 06:41:31.847713       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.847786       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.860136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.860240       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.861510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.861559       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.873408       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.873456       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:41:31.927412       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:41:31.927451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:41:32.862071       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:41:32.928023       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0915 06:41:33.025299       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0915 06:41:37.637716       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:41:38.658596       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 06:42:02.158625       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.241.155"}
	I0915 06:43:21.460078       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.209.137"}
	I0915 06:46:24.216271       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [b6d57c6bce9ad2ad762193c1f9676439b20c4486a3079c63d9a400a56076a317] <==
	W0915 06:44:23.326167       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:23.326211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:32.839152       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:32.839189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:52.067511       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:52.067558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:44:58.820104       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:44:58.820142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:15.203809       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:15.203848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:26.521057       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:26.521105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:45:30.827371       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:30.827415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:45:42.662116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="85.883µs"
	W0915 06:45:54.506087       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:54.506128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:45:54.663536       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="107.488µs"
	W0915 06:45:54.914815       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:45:54.914852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:46:02.406138       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:46:02.406179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:46:19.009157       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:46:19.009196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:46:23.638463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="6.09µs"
	
	
	==> kube-proxy [2357c6fca01253500bc2a6e87b9d58db0494007101ae13f01dc05bc6a671763f] <==
	I0915 06:30:21.834006       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:30:23.436123       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:30:23.436244       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:30:23.828735       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:30:23.920347       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:30:24.020895       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:30:24.021810       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:30:24.021862       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:30:24.023838       1 config.go:199] "Starting service config controller"
	I0915 06:30:24.035431       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:30:24.037976       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:30:24.024321       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:30:24.038178       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:30:24.038213       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:30:24.024295       1 config.go:328] "Starting node config controller"
	I0915 06:30:24.038343       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:30:24.138804       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [793a3d9d3aa847e8bfb9325cbec38ebd60f391ac4ed4147e69ab9fcc527b85b7] <==
	E0915 06:30:12.440324       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:30:12.439968       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 06:30:12.440368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 06:30:12.440396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:12.440004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:30:12.440436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:12.440042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:12.440462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.324810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:30:13.324857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.358300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:30:13.358343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.387669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:30:13.387710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.459534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:13.459576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.464687       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:30:13.464727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.561227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:30:13.561268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.591583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:30:13.591620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:30:13.632014       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:30:13.632056       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:30:16.638356       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:46:02 addons-022322 kubelet[1653]: I0915 06:46:02.094287    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd81388a-b9de-4e7d-8d84-0eefd7b1070a-script" (OuterVolumeSpecName: "script") pod "fd81388a-b9de-4e7d-8d84-0eefd7b1070a" (UID: "fd81388a-b9de-4e7d-8d84-0eefd7b1070a"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 15 06:46:02 addons-022322 kubelet[1653]: I0915 06:46:02.095812    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd81388a-b9de-4e7d-8d84-0eefd7b1070a-kube-api-access-zdj5t" (OuterVolumeSpecName: "kube-api-access-zdj5t") pod "fd81388a-b9de-4e7d-8d84-0eefd7b1070a" (UID: "fd81388a-b9de-4e7d-8d84-0eefd7b1070a"). InnerVolumeSpecName "kube-api-access-zdj5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:46:02 addons-022322 kubelet[1653]: I0915 06:46:02.194738    1653 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/fd81388a-b9de-4e7d-8d84-0eefd7b1070a-script\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:46:02 addons-022322 kubelet[1653]: I0915 06:46:02.194776    1653 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fd81388a-b9de-4e7d-8d84-0eefd7b1070a-gcp-creds\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:46:02 addons-022322 kubelet[1653]: I0915 06:46:02.194790    1653 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/fd81388a-b9de-4e7d-8d84-0eefd7b1070a-data\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:46:02 addons-022322 kubelet[1653]: I0915 06:46:02.194802    1653 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zdj5t\" (UniqueName: \"kubernetes.io/projected/fd81388a-b9de-4e7d-8d84-0eefd7b1070a-kube-api-access-zdj5t\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:46:04 addons-022322 kubelet[1653]: I0915 06:46:04.655228    1653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd81388a-b9de-4e7d-8d84-0eefd7b1070a" path="/var/lib/kubelet/pods/fd81388a-b9de-4e7d-8d84-0eefd7b1070a/volumes"
	Sep 15 06:46:04 addons-022322 kubelet[1653]: E0915 06:46:04.917507    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382764917279652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:553862,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:04 addons-022322 kubelet[1653]: E0915 06:46:04.917541    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382764917279652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:553862,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:06 addons-022322 kubelet[1653]: E0915 06:46:06.654955    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0c10f95d-3440-424e-8c3e-6436ed190e0b"
	Sep 15 06:46:14 addons-022322 kubelet[1653]: E0915 06:46:14.919450    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382774919223597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:553862,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:14 addons-022322 kubelet[1653]: E0915 06:46:14.919490    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382774919223597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:553862,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:20 addons-022322 kubelet[1653]: E0915 06:46:20.655087    1653 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0c10f95d-3440-424e-8c3e-6436ed190e0b"
	Sep 15 06:46:24 addons-022322 kubelet[1653]: E0915 06:46:24.921811    1653 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382784921566255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:553862,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:24 addons-022322 kubelet[1653]: E0915 06:46:24.921852    1653 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726382784921566255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:553862,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:46:24 addons-022322 kubelet[1653]: I0915 06:46:24.945300    1653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7898557-9596-4239-9fab-1fce4db35921-tmp-dir\") pod \"f7898557-9596-4239-9fab-1fce4db35921\" (UID: \"f7898557-9596-4239-9fab-1fce4db35921\") "
	Sep 15 06:46:24 addons-022322 kubelet[1653]: I0915 06:46:24.945357    1653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g645\" (UniqueName: \"kubernetes.io/projected/f7898557-9596-4239-9fab-1fce4db35921-kube-api-access-8g645\") pod \"f7898557-9596-4239-9fab-1fce4db35921\" (UID: \"f7898557-9596-4239-9fab-1fce4db35921\") "
	Sep 15 06:46:24 addons-022322 kubelet[1653]: I0915 06:46:24.945739    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7898557-9596-4239-9fab-1fce4db35921-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7898557-9596-4239-9fab-1fce4db35921" (UID: "f7898557-9596-4239-9fab-1fce4db35921"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 15 06:46:24 addons-022322 kubelet[1653]: I0915 06:46:24.947229    1653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7898557-9596-4239-9fab-1fce4db35921-kube-api-access-8g645" (OuterVolumeSpecName: "kube-api-access-8g645") pod "f7898557-9596-4239-9fab-1fce4db35921" (UID: "f7898557-9596-4239-9fab-1fce4db35921"). InnerVolumeSpecName "kube-api-access-8g645". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:46:25 addons-022322 kubelet[1653]: I0915 06:46:25.028671    1653 scope.go:117] "RemoveContainer" containerID="a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec"
	Sep 15 06:46:25 addons-022322 kubelet[1653]: I0915 06:46:25.046431    1653 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7898557-9596-4239-9fab-1fce4db35921-tmp-dir\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:46:25 addons-022322 kubelet[1653]: I0915 06:46:25.046490    1653 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8g645\" (UniqueName: \"kubernetes.io/projected/f7898557-9596-4239-9fab-1fce4db35921-kube-api-access-8g645\") on node \"addons-022322\" DevicePath \"\""
	Sep 15 06:46:25 addons-022322 kubelet[1653]: I0915 06:46:25.046869    1653 scope.go:117] "RemoveContainer" containerID="a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec"
	Sep 15 06:46:25 addons-022322 kubelet[1653]: E0915 06:46:25.047246    1653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec\": container with ID starting with a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec not found: ID does not exist" containerID="a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec"
	Sep 15 06:46:25 addons-022322 kubelet[1653]: I0915 06:46:25.047287    1653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec"} err="failed to get container status \"a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec\": rpc error: code = NotFound desc = could not find container \"a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec\": container with ID starting with a31e0f0167cc95eb2cc90a39ae9536dcfba44a945690935b145df53dffd4b5ec not found: ID does not exist"
	
	
	==> storage-provisioner [f16ac41ad768c5af72a289634ca7ed99edb67900cef177b81dd428a113bf6c28] <==
	I0915 06:31:03.471182       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:31:03.479024       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:31:03.479069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:31:03.486210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:31:03.486362       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702!
	I0915 06:31:03.486750       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bf9f02c-8c94-46e0-beae-8c5e4ea3cb36", APIVersion:"v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702 became leader
	I0915 06:31:03.587216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-022322_34316f8b-5348-44f9-9b03-41c6a755d702!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-022322 -n addons-022322
helpers_test.go:261: (dbg) Run:  kubectl --context addons-022322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox hello-world-app-55bf9c44b4-m2kmg test-local-path
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-022322 describe pod busybox hello-world-app-55bf9c44b4-m2kmg test-local-path
helpers_test.go:282: (dbg) kubectl --context addons-022322 describe pod busybox hello-world-app-55bf9c44b4-m2kmg test-local-path:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-022322/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:32:34 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vj9bj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vj9bj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-022322
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m49s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             hello-world-app-55bf9c44b4-m2kmg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-022322/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:43:21 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:           10.244.0.30
	Controlled By:  ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zpqdq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zpqdq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m4s                 default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-m2kmg to addons-022322
	  Warning  Failed     55s (x2 over 2m29s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     55s (x2 over 2m29s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    43s (x2 over 2m28s)  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     43s (x2 over 2m28s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    31s (x3 over 3m4s)   kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxctw (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xxctw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (349.17s)
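
The non-Running pods in the post-mortem above are blocked on image pulls, not on metrics-server itself: the gcr.io busybox pull fails with an auth error, and the docker.io echo-server pull hits the Docker Hub rate limit, leaving both in ImagePullBackOff. A minimal diagnostic sketch, assuming the addons-022322 profile from this run is still up; the image name is copied from the events above, and none of these commands are part of the test harness:

	# confirm the metrics API was removed (this explains the controller-manager
	# "PartialObjectMetadata ... server could not find the requested resource" errors)
	kubectl --context addons-022322 get apiservice v1beta1.metrics.k8s.io

	# list the image-pull failures recorded as events
	kubectl --context addons-022322 get events -n default --field-selector reason=Failed

	# side-load the image from the host to bypass registry auth and rate limits
	minikube -p addons-022322 image load gcr.io/k8s-minikube/busybox:1.28.4-glibc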

                                                
                                    
TestAddons/parallel/LocalPath (300.21s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-022322 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-022322 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... the identical kubectl poll above repeated 176 more times, once per retry interval, until the wait deadline; duplicate lines collapsed ...]
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-022322 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.319µs)
helpers_test.go:396: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:993: failed waiting for PVC test-pvc: context deadline exceeded
--- FAIL: TestAddons/parallel/LocalPath (300.21s)
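The wait loop above re-runs the same kubectl query until the test's deadline fires; the failure is the deadline expiring before the claim ever reports Bound. Below is a minimal sketch of that polling pattern, assuming a hypothetical waitForPVCBound helper and a 2-second retry interval; it is illustrative only, not the actual helpers_test.go code.

-- sketch (Go) --
// Minimal sketch of a deadline-bounded PVC poll, assuming a hypothetical
// waitForPVCBound helper; the kubectl invocation mirrors the log lines above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound re-runs the same kubectl query the log shows, once per
// tick, until the claim reports Bound or the context deadline expires.
func waitForPVCBound(ctx context.Context, kubeContext, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second) // assumed interval
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// This branch is the failure mode logged above:
			// "failed waiting for PVC test-pvc: context deadline exceeded"
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-ticker.C:
			out, err := exec.CommandContext(ctx, "kubectl",
				"--context", kubeContext, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", ns).Output()
			if err != nil {
				continue // transient kubectl error; keep retrying until the deadline
			}
			if strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForPVCBound(ctx, "addons-022322", "default", "test-pvc"); err != nil {
		fmt.Println(err)
	}
}
-- /sketch --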

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (188.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1a35d1ee-8af7-4558-ac57-2ba32c938d7e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003376583s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-988233 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-988233 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-988233 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-988233 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5c05705e-08b2-4fe9-924f-f55145b976f8] Pending
helpers_test.go:344: "sp-pod" [5c05705e-08b2-4fe9-924f-f55145b976f8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0915 06:50:17.996812   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-988233 -n functional-988233
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-09-15 06:53:04.934069848 +0000 UTC m=+1422.070253691
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-988233 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-988233 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-988233/192.168.49.2
Start Time:       Sun, 15 Sep 2024 06:50:04 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdgxx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-rdgxx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  3m                default-scheduler  Successfully assigned default/sp-pod to functional-988233
  Warning  Failed     91s               kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     91s               kubelet            Error: ErrImagePull
  Normal   BackOff    91s               kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     91s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    78s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-988233 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-988233 logs sp-pod -n default: exit status 1 (59.763448ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-988233 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
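The describe events above pin the failure on Docker Hub pull rate limiting, which surfaces as an ImagePullBackOff waiting reason on the myfrontend container. As a hedged illustration only, the same diagnosis could be read programmatically with client-go instead of parsing `kubectl describe`; the kubeconfig path below is an assumption, not the harness's actual path.

-- sketch (Go) --
// Illustrative only: surfacing the container waiting reason via client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; the test harness uses its own profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(context.Background(), "sp-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if w := st.State.Waiting; w != nil {
			// For the failure above this would print something like:
			// myfrontend ImagePullBackOff Back-off pulling image "docker.io/nginx"
			fmt.Println(st.Name, w.Reason, w.Message)
		}
	}
}
-- /sketch --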
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-988233
helpers_test.go:235: (dbg) docker inspect functional-988233:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49",
	        "Created": "2024-09-15T06:47:37.440283707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:47:37.547477257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49/hostname",
	        "HostsPath": "/var/lib/docker/containers/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49/hosts",
	        "LogPath": "/var/lib/docker/containers/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49-json.log",
	        "Name": "/functional-988233",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-988233:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-988233",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ae362ada78c2dc40cd6b50dde5cf008e1eb7e6edbc3d5b300ec167a74acb1a7e-init/diff:/var/lib/docker/overlay2/41629ade7f7315f2df14bde3ca812850a45d34be79d1a0e1cd0df4510f198eaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae362ada78c2dc40cd6b50dde5cf008e1eb7e6edbc3d5b300ec167a74acb1a7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae362ada78c2dc40cd6b50dde5cf008e1eb7e6edbc3d5b300ec167a74acb1a7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae362ada78c2dc40cd6b50dde5cf008e1eb7e6edbc3d5b300ec167a74acb1a7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-988233",
	                "Source": "/var/lib/docker/volumes/functional-988233/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-988233",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-988233",
	                "name.minikube.sigs.k8s.io": "functional-988233",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "165dd519a144a57cfed2b8ef0f77b98daafd4934b73c3e52c21cad8e6e9f3c9f",
	            "SandboxKey": "/var/run/docker/netns/165dd519a144",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-988233": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2b6769285d74c8385812c5d317df12dd7f4e37bee5a33c33bba3672d8e768f27",
	                    "EndpointID": "3470fa941d9ae684b52405a617b072bcfee50eaccc5862000bda4d64f2c376cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-988233",
	                        "41cfb8d78a48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
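The post-mortem gathers the state above by shelling out to `docker inspect`. As a sketch under stated assumptions (the client options are illustrative; this is not how helpers_test.go collects it), the same fields, such as run state and published ports, are also available through the Docker Engine Go SDK:

-- sketch (Go) --
// Illustrative only: reading the run state and port bindings shown in the
// docker inspect dump above, via the Docker Engine Go SDK.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Inspect by container name, matching `docker inspect functional-988233`.
	info, err := cli.ContainerInspect(context.Background(), "functional-988233")
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", info.State.Status) // "running" in the dump above
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort) // e.g. 22/tcp -> 127.0.0.1:32778
		}
	}
}
-- /sketch --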
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-988233 -n functional-988233
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-988233 logs -n 25: (1.374672882s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-988233 ssh findmnt                                              | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -T /mount1                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh findmnt                                              | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -T /mount2                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh findmnt                                              | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -T /mount3                                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-988233                                                       | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | --kill=true                                                                |                   |         |         |                     |                     |
	| image          | functional-988233 image load --daemon                                      | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-988233                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233 image ls                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-988233 image load --daemon                                      | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-988233                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233 image ls                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-988233 image save kicbase/echo-server:functional-988233         | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233 image rm                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-988233                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233 image ls                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-988233 image load                                               | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh sudo                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | systemctl is-active docker                                                 |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh sudo                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | systemctl is-active containerd                                             |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh sudo cat                                             | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | /etc/test/nested/copy/12591/hosts                                          |                   |         |         |                     |                     |
	| image          | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh pgrep                                                | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-988233 image build -t                                           | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | localhost/my-image:functional-988233                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-988233 image ls                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| update-context | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:51:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:51:12.658837   57242 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:12.659116   57242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:12.659126   57242 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:12.659131   57242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:12.659311   57242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:51:12.659823   57242 out.go:352] Setting JSON to false
	I0915 06:51:12.660860   57242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2024,"bootTime":1726381049,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:51:12.660972   57242 start.go:139] virtualization: kvm guest
	I0915 06:51:12.663102   57242 out.go:177] * [functional-988233] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:51:12.664482   57242 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:51:12.664551   57242 notify.go:220] Checking for updates...
	I0915 06:51:12.667344   57242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:12.668673   57242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:51:12.669935   57242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:51:12.671092   57242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:12.672231   57242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:12.673972   57242 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:12.674649   57242 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:12.698649   57242 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:51:12.698760   57242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:51:12.750644   57242 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:51:12.739851903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:51:12.750744   57242 docker.go:318] overlay module found
	I0915 06:51:12.752922   57242 out.go:177] * Using the docker driver based on existing profile
	I0915 06:51:12.754063   57242 start.go:297] selected driver: docker
	I0915 06:51:12.754078   57242 start.go:901] validating driver "docker" against &{Name:functional-988233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-988233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:12.754240   57242 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:12.754341   57242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:51:12.806677   57242 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:51:12.797449918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:51:12.807286   57242 cni.go:84] Creating CNI manager for ""
	I0915 06:51:12.807333   57242 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:51:12.807397   57242 start.go:340] cluster config:
	{Name:functional-988233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-988233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:12.809120   57242 out.go:177] * dry-run validation complete!
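The config dump above is the full ClusterConfig that minikube validates in dry-run mode without mutating any state. A minimal sketch of reproducing that validation by hand, assuming the flags implied by the dump (profile, driver, and runtime are taken from it; any other flags used by this run are not shown in the log):

	minikube start -p functional-988233 --driver=docker --container-runtime=crio --dry-run --alsologtostderr -v=1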
	
	
	==> CRI-O <==
	Sep 15 06:51:42 functional-988233 crio[4815]: time="2024-09-15 06:51:42.196074511Z" level=info msg="Image docker.io/kicbase/echo-server:functional-988233 not found" id=620b26e8-0cad-4491-ac8a-2eb488d8a38c name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:42 functional-988233 crio[4815]: time="2024-09-15 06:51:42.226862753Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-988233" id=18a3d176-b0c8-4856-9db5-3dd8c8c2f993 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:42 functional-988233 crio[4815]: time="2024-09-15 06:51:42.227047009Z" level=info msg="Image localhost/kicbase/echo-server:functional-988233 not found" id=18a3d176-b0c8-4856-9db5-3dd8c8c2f993 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:43 functional-988233 crio[4815]: time="2024-09-15 06:51:43.350881517Z" level=info msg="Checking image status: kicbase/echo-server:functional-988233" id=56398e63-dcdf-4425-b80c-5f06ac04c27e name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:43 functional-988233 crio[4815]: time="2024-09-15 06:51:43.382186870Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-988233" id=b5b9ca52-380b-4863-88e4-032e9bbbff3e name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:43 functional-988233 crio[4815]: time="2024-09-15 06:51:43.382428876Z" level=info msg="Image docker.io/kicbase/echo-server:functional-988233 not found" id=b5b9ca52-380b-4863-88e4-032e9bbbff3e name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:43 functional-988233 crio[4815]: time="2024-09-15 06:51:43.413245197Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-988233" id=a9ef3f42-2355-4b06-9a54-a8b80c9f20ad name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:43 functional-988233 crio[4815]: time="2024-09-15 06:51:43.413434048Z" level=info msg="Image localhost/kicbase/echo-server:functional-988233 not found" id=a9ef3f42-2355-4b06-9a54-a8b80c9f20ad name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:44 functional-988233 crio[4815]: time="2024-09-15 06:51:44.996559412Z" level=info msg="Running pod sandbox: default/mysql-6cdb49bbb-b264w/POD" id=6920fd8d-5c23-4462-810b-be00bd8d8480 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 15 06:51:44 functional-988233 crio[4815]: time="2024-09-15 06:51:44.996620352Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 15 06:51:45 functional-988233 crio[4815]: time="2024-09-15 06:51:45.010483685Z" level=info msg="Got pod network &{Name:mysql-6cdb49bbb-b264w Namespace:default ID:9315c549f534e52afce7ed97fa5e23c28ab8ca3fa34fb9a91f00e646febb8afe UID:759c931a-17f9-489f-ab67-575f4cbb603b NetNS:/var/run/netns/d8cd56d3-24a8-451f-afcd-a038c2e0d76a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:51:45 functional-988233 crio[4815]: time="2024-09-15 06:51:45.010532507Z" level=info msg="Adding pod default_mysql-6cdb49bbb-b264w to CNI network \"kindnet\" (type=ptp)"
	Sep 15 06:51:45 functional-988233 crio[4815]: time="2024-09-15 06:51:45.019070361Z" level=info msg="Got pod network &{Name:mysql-6cdb49bbb-b264w Namespace:default ID:9315c549f534e52afce7ed97fa5e23c28ab8ca3fa34fb9a91f00e646febb8afe UID:759c931a-17f9-489f-ab67-575f4cbb603b NetNS:/var/run/netns/d8cd56d3-24a8-451f-afcd-a038c2e0d76a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 15 06:51:45 functional-988233 crio[4815]: time="2024-09-15 06:51:45.019211020Z" level=info msg="Checking pod default_mysql-6cdb49bbb-b264w for CNI network kindnet (type=ptp)"
	Sep 15 06:51:45 functional-988233 crio[4815]: time="2024-09-15 06:51:45.021299948Z" level=info msg="Ran pod sandbox 9315c549f534e52afce7ed97fa5e23c28ab8ca3fa34fb9a91f00e646febb8afe with infra container: default/mysql-6cdb49bbb-b264w/POD" id=6920fd8d-5c23-4462-810b-be00bd8d8480 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 15 06:51:45 functional-988233 crio[4815]: time="2024-09-15 06:51:45.022429320Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=452b57da-4e65-422a-bacd-7f8404b61910 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:45 functional-988233 crio[4815]: time="2024-09-15 06:51:45.022646675Z" level=info msg="Image docker.io/mysql:5.7 not found" id=452b57da-4e65-422a-bacd-7f8404b61910 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:46 functional-988233 crio[4815]: time="2024-09-15 06:51:46.345926420Z" level=info msg="Checking image status: docker.io/nginx:latest" id=68cd94ea-c081-4cd0-9ac5-3a9a3a0e0627 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:51:46 functional-988233 crio[4815]: time="2024-09-15 06:51:46.346199349Z" level=info msg="Image docker.io/nginx:latest not found" id=68cd94ea-c081-4cd0-9ac5-3a9a3a0e0627 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:52:11 functional-988233 crio[4815]: time="2024-09-15 06:52:11.504805398Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=770e35a2-9404-486b-831b-dfe5a3e5d66a name=/runtime.v1.ImageService/PullImage
	Sep 15 06:52:11 functional-988233 crio[4815]: time="2024-09-15 06:52:11.523016348Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 15 06:52:26 functional-988233 crio[4815]: time="2024-09-15 06:52:26.345069162Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=2bb7d739-4116-46b6-b699-17464ac2d127 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:52:26 functional-988233 crio[4815]: time="2024-09-15 06:52:26.345295236Z" level=info msg="Image docker.io/nginx:alpine not found" id=2bb7d739-4116-46b6-b699-17464ac2d127 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:52:37 functional-988233 crio[4815]: time="2024-09-15 06:52:37.345223941Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=fd3d7c60-4f34-4861-a9b3-56a78b0f7723 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 06:52:37 functional-988233 crio[4815]: time="2024-09-15 06:52:37.345519264Z" level=info msg="Image docker.io/nginx:alpine not found" id=fd3d7c60-4f34-4861-a9b3-56a78b0f7723 name=/runtime.v1.ImageService/ImageStatus
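The repeated "Checking image status ... not found" pairs above are CRI-O resolving each image reference (the kicbase/echo-server tags, mysql:5.7, the nginx tags) against local storage before falling back to a registry pull; the later "Pulling image" / "Trying to access" lines are that pull starting. The same checks can be run by hand over the CRI socket with crictl from inside the node, a sketch rather than anything the test itself runs:

	minikube -p functional-988233 ssh -- sudo crictl images
	minikube -p functional-988233 ssh -- sudo crictl pull docker.io/mysql:5.7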
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	57bed4d3ae734       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   3a177335dfcaf       dashboard-metrics-scraper-c5db448b4-lk9lm
	3388d493d47a0       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   bbed29bcd6dd8       kubernetes-dashboard-695b96c756-rdplq
	86b1c49345dd4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              About a minute ago   Exited              mount-munger                0                   fbd546b63b7e0       busybox-mount
	ed4c0d281d0a4       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   f6725f0971b11       hello-node-6b9f76b5c7-lsj6m
	966c84fddcd55       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   6d9d915c2f67f       hello-node-connect-67bdd5bbb4-89pwf
	2d6813a912c1e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     2                   8773f021832ae       coredns-7c65d6cfc9-rk5fm
	481f0c1712799       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 3 minutes ago        Running             kindnet-cni                 2                   0da1601578efa       kindnet-zhpsl
	4121a2b8b9324       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 3 minutes ago        Running             kube-proxy                  2                   342d6085dfae0       kube-proxy-95pbv
	c8fcd9acbc55f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         3                   9f3ad6db200f6       storage-provisioner
	ddb6f02e87c20       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 3 minutes ago        Running             kube-apiserver              0                   769be70480681       kube-apiserver-functional-988233
	c78356bd45056       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago        Running             etcd                        2                   1cf756032d79b       etcd-functional-988233
	992a89a63c2bf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 3 minutes ago        Running             kube-controller-manager     2                   91fa36e569ad9       kube-controller-manager-functional-988233
	a09a9c2619b21       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 3 minutes ago        Running             kube-scheduler              2                   d24f38f0d8e79       kube-scheduler-functional-988233
	91f3c99c8863a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago        Exited              storage-provisioner         2                   9f3ad6db200f6       storage-provisioner
	6ab67d1177f83       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago        Exited              coredns                     1                   8773f021832ae       coredns-7c65d6cfc9-rk5fm
	e907d5b01c084       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 4 minutes ago        Exited              etcd                        1                   1cf756032d79b       etcd-functional-988233
	7ca7c7cccb4ef       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 4 minutes ago        Exited              kube-scheduler              1                   d24f38f0d8e79       kube-scheduler-functional-988233
	583b9a8f09411       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 4 minutes ago        Exited              kindnet-cni                 1                   0da1601578efa       kindnet-zhpsl
	a45a492b79ddc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 4 minutes ago        Exited              kube-proxy                  1                   342d6085dfae0       kube-proxy-95pbv
	89b9667e3df56       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 4 minutes ago        Exited              kube-controller-manager     1                   91fa36e569ad9       kube-controller-manager-functional-988233
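This table is the crictl view of the node: the Exited rows with ATTEMPT 1 are the control-plane containers from before the functional test restarted the cluster, and the Running rows with ATTEMPT 2 (kube-apiserver at 0 because it came back under a new pod ID) are their replacements. The same view can be reproduced on the node; column layout may vary by crictl version:

	minikube -p functional-988233 ssh -- sudo crictl ps -a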
	
	
	==> coredns [2d6813a912c1e6ff4bc22ceaa8d09de1786eedf0bf05c19c19b47bce0e2a11e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54686 - 47530 "HINFO IN 1964691982139342335.3606816975868381648. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02897023s
	
	
	==> coredns [6ab67d1177f83f984381715f236bda94895da0a8d8c18e22059c04a79090914e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36249 - 56625 "HINFO IN 7174451905134525776.513574974473026608. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.101357567s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
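This older coredns instance shows the restart from its side: every list/watch against the in-cluster apiserver VIP (10.96.0.1:443) is refused while the apiserver is down, the ready plugin keeps reporting "Still waiting", and the process finally receives SIGTERM with a 5s lameduck window when its replacement starts. Logs of a replaced container like this one can still be retrieved afterwards with the --previous flag:

	kubectl --context functional-988233 -n kube-system logs coredns-7c65d6cfc9-rk5fm --previous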
	
	
	==> describe nodes <==
	Name:               functional-988233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-988233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=functional-988233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_47_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:47:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-988233
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:52:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:52:09 +0000   Sun, 15 Sep 2024 06:47:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:52:09 +0000   Sun, 15 Sep 2024 06:47:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:52:09 +0000   Sun, 15 Sep 2024 06:47:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:52:09 +0000   Sun, 15 Sep 2024 06:48:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-988233
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7dfdec46a7964646a7e1d2b20b49794b
	  System UUID:                4ae1499b-e27e-4343-b06d-678c48bc012c
	  Boot ID:                    d7eb9d55-e244-423e-b0bb-fd0ad06c12bb
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-lsj6m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-node-connect-67bdd5bbb4-89pwf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     mysql-6cdb49bbb-b264w                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     82s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7c65d6cfc9-rk5fm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m11s
	  kube-system                 etcd-functional-988233                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m16s
	  kube-system                 kindnet-zhpsl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m11s
	  kube-system                 kube-apiserver-functional-988233             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-controller-manager-functional-988233    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-95pbv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-functional-988233             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-lk9lm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-rdplq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m9s                   kube-proxy       
	  Normal   Starting                 3m29s                  kube-proxy       
	  Normal   Starting                 4m14s                  kube-proxy       
	  Warning  CgroupV1                 5m17s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m17s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m16s                  kubelet          Node functional-988233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m16s                  kubelet          Node functional-988233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m16s                  kubelet          Node functional-988233 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m12s                  node-controller  Node functional-988233 event: Registered Node functional-988233 in Controller
	  Normal   NodeReady                4m30s                  kubelet          Node functional-988233 status is now: NodeReady
	  Normal   RegisteredNode           4m13s                  node-controller  Node functional-988233 event: Registered Node functional-988233 in Controller
	  Normal   NodeHasSufficientMemory  3m34s (x8 over 3m34s)  kubelet          Node functional-988233 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 3m34s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m34s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    3m34s (x8 over 3m34s)  kubelet          Node functional-988233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m34s (x7 over 3m34s)  kubelet          Node functional-988233 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m27s                  node-controller  Node functional-988233 event: Registered Node functional-988233 in Controller
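The node description above matches what kubectl produces directly; the duplicated Starting/NodeHasSufficient* events at 5m17s and again at 3m34s, plus the three RegisteredNode entries, reflect the kubelet and controller-manager coming up once at initial start and again after each restart:

	kubectl --context functional-988233 describe node functional-988233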
	
	
	==> dmesg <==
	[  +0.000619] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.600975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.568733] kauditd_printk_skb: 46 callbacks suppressed
	[Sep15 06:41] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +1.004271] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +2.015809] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +4.127715] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +8.191377] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[ +16.126848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[Sep15 06:42] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[Sep15 06:51] FS-Cache: Duplicate cookie detected
	[  +0.004727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006738] FS-Cache: O-cookie d=000000000b7cd976{9P.session} n=0000000071b4f7d6
	[  +0.007524] FS-Cache: O-key=[10] '34323935343034373731'
	[  +0.005409] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.008046] FS-Cache: N-cookie d=000000000b7cd976{9P.session} n=000000002f47d984
	[  +0.008988] FS-Cache: N-key=[10] '34323935343034373731'
	[  +7.987909] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
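The bursts of "martian source 10.244.0.20 from 127.0.0.1" are the kernel flagging packets that claim a loopback source address on eth0; in nested container networks like this one they are generally benign log noise rather than a test failure, and the FS-Cache duplicate-cookie lines come from the 9P session for the host mount. The ring buffer can be read directly from the node (a sketch):

	minikube -p functional-988233 ssh -- sudo dmesg | tail -n 50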
	
	
	==> etcd [c78356bd450563c4b24c613a21ae445158c1445a3267f8ab291f7a2bc46a22aa] <==
	{"level":"info","ts":"2024-09-15T06:49:33.243175Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-15T06:49:33.243303Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:49:33.243339Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:49:33.243442Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:49:33.246460Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-15T06:49:33.246586Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:49:33.246616Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:49:33.246854Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T06:49:33.246931Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T06:49:34.733056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:34.733112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:34.733160Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:34.733175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-15T06:49:34.733181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-15T06:49:34.733190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-15T06:49:34.733205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-15T06:49:34.734317Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-988233 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:49:34.734339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:49:34.734333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:49:34.734518Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:49:34.734538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:49:34.735532Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:49:34.735538Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:49:34.737056Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:49:34.737154Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
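This is the post-restart etcd: a single-voting-member cluster, so member aec36adc501070cc pre-votes and votes for itself and becomes leader at term 4 with no peers involved. A hedged sketch of probing its health from inside the etcd pod, reusing the cert paths logged above and assuming etcdctl is available in the etcd image (it ships in registry.k8s.io/etcd):

	kubectl --context functional-988233 -n kube-system exec etcd-functional-988233 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health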
	
	
	==> etcd [e907d5b01c0844c5e53aae3914efddffc4a2da35305cd5420deb9ca2b3475a85] <==
	{"level":"info","ts":"2024-09-15T06:48:50.147822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-15T06:48:50.147856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:48:50.147872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-15T06:48:50.147881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-15T06:48:50.147904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-15T06:48:50.147922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-15T06:48:50.148899Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-988233 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:48:50.148926Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:48:50.148944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:48:50.149145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:48:50.149180Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:48:50.149867Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:48:50.150200Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:48:50.150679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:48:50.150962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:49:15.450932Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-15T06:49:15.451035Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-988233","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-15T06:49:15.451130Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:49:15.451263Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:49:15.462544Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:49:15.462602Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-15T06:49:15.464074Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-15T06:49:15.466722Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:49:15.466827Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:49:15.466842Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-988233","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 06:53:06 up 35 min,  0 users,  load average: 0.28, 0.37, 0.37
	Linux functional-988233 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [481f0c17127996675ca77ca1804141eddaefcbe530512e0b438d0824634bea42] <==
	I0915 06:50:57.356328       1 main.go:299] handling current node
	I0915 06:51:07.356320       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:51:07.356361       1 main.go:299] handling current node
	I0915 06:51:17.353026       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:51:17.353059       1 main.go:299] handling current node
	I0915 06:51:27.360276       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:51:27.360319       1 main.go:299] handling current node
	I0915 06:51:37.353600       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:51:37.353639       1 main.go:299] handling current node
	I0915 06:51:47.353346       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:51:47.353384       1 main.go:299] handling current node
	I0915 06:51:57.362425       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:51:57.362459       1 main.go:299] handling current node
	I0915 06:52:07.360933       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:52:07.360965       1 main.go:299] handling current node
	I0915 06:52:17.353037       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:52:17.353068       1 main.go:299] handling current node
	I0915 06:52:27.353353       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:52:27.353408       1 main.go:299] handling current node
	I0915 06:52:37.353423       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:52:37.353457       1 main.go:299] handling current node
	I0915 06:52:47.360288       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:52:47.360332       1 main.go:299] handling current node
	I0915 06:52:57.362381       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:52:57.362416       1 main.go:299] handling current node
	
	
	==> kindnet [583b9a8f0941166da0b5e85a1ef3e551c87272617cd3792995b6c9517a084c0a] <==
	I0915 06:48:48.626360       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0915 06:48:48.626850       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0915 06:48:48.627047       1 main.go:148] setting mtu 1500 for CNI 
	I0915 06:48:48.627095       1 main.go:178] kindnetd IP family: "ipv4"
	I0915 06:48:48.627128       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0915 06:48:49.045381       1 controller.go:334] Starting controller kube-network-policies
	I0915 06:48:49.045471       1 controller.go:338] Waiting for informer caches to sync
	I0915 06:48:49.045498       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0915 06:48:51.423739       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0915 06:48:51.423850       1 metrics.go:61] Registering metrics
	I0915 06:48:51.423937       1 controller.go:374] Syncing nftables rules
	I0915 06:48:59.046044       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:48:59.046112       1 main.go:299] handling current node
	I0915 06:49:09.047261       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:49:09.047301       1 main.go:299] handling current node
	
	
	==> kube-apiserver [ddb6f02e87c2003f2ed74624b00ab8c6d02bf0ac852049c3fe464f2f7e17b01a] <==
	I0915 06:49:35.761174       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 06:49:35.820440       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 06:49:35.820525       1 aggregator.go:171] initial CRD sync complete...
	I0915 06:49:35.820537       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 06:49:35.820545       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 06:49:35.820551       1 cache.go:39] Caches are synced for autoregister controller
	I0915 06:49:35.820599       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0915 06:49:35.826131       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0915 06:49:36.656515       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 06:49:37.754636       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 06:49:37.869914       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 06:49:37.879328       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 06:49:37.926821       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 06:49:37.931949       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 06:49:53.832616       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.235.34"}
	I0915 06:49:53.840814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 06:49:53.840823       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 06:49:58.502630       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.161.12"}
	I0915 06:49:59.314822       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0915 06:49:59.397019       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.233.212"}
	I0915 06:49:59.511774       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.53.235"}
	I0915 06:51:13.736293       1 controller.go:615] quota admission added evaluator for: namespaces
	I0915 06:51:13.946553       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.11.205"}
	I0915 06:51:13.959159       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.208.35"}
	I0915 06:51:44.644135       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.100.88"}
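Each "allocated clusterIPs" line here marks a Service created by the parallel functional tests (invalid-svc, nginx-svc, hello-node, hello-node-connect, the dashboard pair, mysql), while the "quota admission added evaluator" lines are the quota plugin lazily registering resource types on first use. The allocations can be cross-checked against the live Service list:

	kubectl --context functional-988233 get svc --all-namespaces -o wide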
	
	
	==> kube-controller-manager [89b9667e3df56b5f5d6ddf49b4347760baae65ef8111e1fc806e4881304dcfbe] <==
	I0915 06:48:53.694997       1 shared_informer.go:320] Caches are synced for daemon sets
	I0915 06:48:53.695018       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0915 06:48:53.694961       1 shared_informer.go:320] Caches are synced for job
	I0915 06:48:53.695030       1 shared_informer.go:320] Caches are synced for endpoint
	I0915 06:48:53.695067       1 shared_informer.go:320] Caches are synced for PVC protection
	I0915 06:48:53.695080       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-988233"
	I0915 06:48:53.695071       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0915 06:48:53.695074       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0915 06:48:53.695212       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0915 06:48:53.698265       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0915 06:48:53.699975       1 shared_informer.go:320] Caches are synced for stateful set
	I0915 06:48:53.744588       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0915 06:48:53.749846       1 shared_informer.go:320] Caches are synced for namespace
	I0915 06:48:53.775696       1 shared_informer.go:320] Caches are synced for cronjob
	I0915 06:48:53.794793       1 shared_informer.go:320] Caches are synced for service account
	I0915 06:48:53.803526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="158.227925ms"
	I0915 06:48:53.803739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.13µs"
	I0915 06:48:53.855645       1 shared_informer.go:320] Caches are synced for HPA
	I0915 06:48:53.892122       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:48:53.898627       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:48:54.310454       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:48:54.360817       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:48:54.360850       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0915 06:48:56.952291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.098541ms"
	I0915 06:48:56.952408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.35µs"
	
	
	==> kube-controller-manager [992a89a63c2bfe77f67b4968e1bc9e2c8e43351fb52fa34745713e296461096c] <==
	I0915 06:51:13.788661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.746156ms"
	E0915 06:51:13.788690       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 06:51:13.820667       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="32.63664ms"
	E0915 06:51:13.820705       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 06:51:13.825101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.737803ms"
	E0915 06:51:13.825134       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 06:51:13.827911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.145664ms"
	E0915 06:51:13.827942       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 06:51:13.841393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.154388ms"
	I0915 06:51:13.931287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="89.843254ms"
	I0915 06:51:13.931403       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="74.846µs"
	I0915 06:51:13.937650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="17.21897ms"
	I0915 06:51:13.950927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.232629ms"
	I0915 06:51:13.951030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="58.425µs"
	I0915 06:51:13.951120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="27.272µs"
	I0915 06:51:38.331511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-988233"
	I0915 06:51:39.743770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.753984ms"
	I0915 06:51:39.743877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="66.192µs"
	I0915 06:51:41.750731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="8.620333ms"
	I0915 06:51:41.750829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="58.4µs"
	I0915 06:51:44.696043       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="15.330083ms"
	I0915 06:51:44.701163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="5.075785ms"
	I0915 06:51:44.701254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="48.327µs"
	I0915 06:51:44.702025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="57.447µs"
	I0915 06:52:09.090575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-988233"
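The "serviceaccount \"kubernetes-dashboard\" not found" errors at 06:51:13 are a startup ordering race rather than a failure: the ReplicaSet controller tries to create the dashboard pods before the namespace's ServiceAccount exists, and the successful "Finished syncing" lines that follow show the retry succeeding once it does. The end state can be confirmed with:

	kubectl --context functional-988233 -n kubernetes-dashboard get serviceaccounts,replicasets,pods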
	
	
	==> kube-proxy [4121a2b8b932472034d1bc8b1cd95546f34d3e8a03d8db7b74e270f1f480da04] <==
	I0915 06:49:36.938818       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:49:37.079836       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:49:37.079908       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:49:37.136509       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:49:37.136580       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:49:37.138402       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:49:37.138767       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:49:37.138798       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:49:37.140128       1 config.go:199] "Starting service config controller"
	I0915 06:49:37.140176       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:49:37.140218       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:49:37.140224       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:49:37.140876       1 config.go:328] "Starting node config controller"
	I0915 06:49:37.140954       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:49:37.241403       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:49:37.241409       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:49:37.241457       1 shared_informer.go:320] Caches are synced for endpoint slice config
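Both kube-proxy instances log the same benign E-level warning: with nodePortAddresses unset, NodePort traffic is accepted on every local IP, and the suggested remedy is "--nodeport-addresses primary". Under a kubeadm-provisioned cluster like this one, that setting lives in the kube-proxy ConfigMap rather than on the command line:

	kubectl --context functional-988233 -n kube-system get configmap kube-proxy -o yaml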
	
	
	==> kube-proxy [a45a492b79ddc757624100cdbdf6b431835e9804598ea6d7f6dca1445b286574] <==
	I0915 06:48:48.637679       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:48:51.332972       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:48:51.333183       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:48:51.535507       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:48:51.535658       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:48:51.541720       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:48:51.542142       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:48:51.542182       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:48:51.543233       1 config.go:199] "Starting service config controller"
	I0915 06:48:51.543273       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:48:51.543294       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:48:51.543301       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:48:51.543845       1 config.go:328] "Starting node config controller"
	I0915 06:48:51.543868       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:48:51.644269       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:48:51.644292       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:48:51.644307       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7ca7c7cccb4efd45e80019ea017bb337d062924034af15a4074def83557b1840] <==
	I0915 06:48:49.637406       1 serving.go:386] Generated self-signed cert in-memory
	I0915 06:48:51.427150       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 06:48:51.427252       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:48:51.432872       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0915 06:48:51.432907       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 06:48:51.432914       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0915 06:48:51.432980       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0915 06:48:51.433000       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0915 06:48:51.432929       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:48:51.433680       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 06:48:51.433760       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 06:48:51.533742       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0915 06:48:51.533790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:48:51.533745       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	E0915 06:49:15.450626       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a09a9c2619b217bf22a942008e5719652a4c95aeeb69b4c64cf31efab26415f0] <==
	I0915 06:49:33.947883       1 serving.go:386] Generated self-signed cert in-memory
	I0915 06:49:35.742660       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 06:49:35.742684       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:49:35.746341       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0915 06:49:35.746356       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 06:49:35.746377       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0915 06:49:35.746378       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:49:35.746378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0915 06:49:35.746395       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0915 06:49:35.746616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 06:49:35.747456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 06:49:35.846928       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:49:35.846954       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0915 06:49:35.846938       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Sep 15 06:51:42 functional-988233 kubelet[5178]: E0915 06:51:42.462792    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383102462603946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:200996,},InodesUsed:&UInt64Value{Value:100,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:51:44 functional-988233 kubelet[5178]: E0915 06:51:44.695181    5178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9ba38985-c910-4b42-9164-f0a898a058fb" containerName="mount-munger"
	Sep 15 06:51:44 functional-988233 kubelet[5178]: I0915 06:51:44.695275    5178 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba38985-c910-4b42-9164-f0a898a058fb" containerName="mount-munger"
	Sep 15 06:51:44 functional-988233 kubelet[5178]: I0915 06:51:44.817492    5178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpx56\" (UniqueName: \"kubernetes.io/projected/759c931a-17f9-489f-ab67-575f4cbb603b-kube-api-access-hpx56\") pod \"mysql-6cdb49bbb-b264w\" (UID: \"759c931a-17f9-489f-ab67-575f4cbb603b\") " pod="default/mysql-6cdb49bbb-b264w"
	Sep 15 06:51:52 functional-988233 kubelet[5178]: E0915 06:51:52.464588    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383112464410666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:51:52 functional-988233 kubelet[5178]: E0915 06:51:52.464646    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383112464410666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:02 functional-988233 kubelet[5178]: E0915 06:52:02.466045    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383122465873227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:02 functional-988233 kubelet[5178]: E0915 06:52:02.466086    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383122465873227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:11 functional-988233 kubelet[5178]: E0915 06:52:11.504350    5178 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 15 06:52:11 functional-988233 kubelet[5178]: E0915 06:52:11.504421    5178 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 15 06:52:11 functional-988233 kubelet[5178]: E0915 06:52:11.504667    5178 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v2p6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(17288952-bc71-4475-95b5-3b5ceb1e6ca7): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 15 06:52:11 functional-988233 kubelet[5178]: E0915 06:52:11.505948    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="17288952-bc71-4475-95b5-3b5ceb1e6ca7"
	Sep 15 06:52:12 functional-988233 kubelet[5178]: E0915 06:52:12.467372    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383132467183181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:12 functional-988233 kubelet[5178]: E0915 06:52:12.467413    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383132467183181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:22 functional-988233 kubelet[5178]: E0915 06:52:22.468904    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383142468718561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:22 functional-988233 kubelet[5178]: E0915 06:52:22.468945    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383142468718561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:26 functional-988233 kubelet[5178]: E0915 06:52:26.345530    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="17288952-bc71-4475-95b5-3b5ceb1e6ca7"
	Sep 15 06:52:32 functional-988233 kubelet[5178]: E0915 06:52:32.470343    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383152470171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:32 functional-988233 kubelet[5178]: E0915 06:52:32.470386    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383152470171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:42 functional-988233 kubelet[5178]: E0915 06:52:42.471850    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383162471677970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:42 functional-988233 kubelet[5178]: E0915 06:52:42.471889    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383162471677970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:52 functional-988233 kubelet[5178]: E0915 06:52:52.473293    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383172473142724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:52:52 functional-988233 kubelet[5178]: E0915 06:52:52.473335    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383172473142724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:53:02 functional-988233 kubelet[5178]: E0915 06:53:02.474769    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383182474602730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 06:53:02 functional-988233 kubelet[5178]: E0915 06:53:02.474817    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383182474602730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [3388d493d47a038b791c2922ded0409cd3571367901f43b52867256c239523fb] <==
	2024/09/15 06:51:38 Using namespace: kubernetes-dashboard
	2024/09/15 06:51:38 Using in-cluster config to connect to apiserver
	2024/09/15 06:51:38 Using secret token for csrf signing
	2024/09/15 06:51:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/15 06:51:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/15 06:51:38 Successful initial request to the apiserver, version: v1.31.1
	2024/09/15 06:51:38 Generating JWE encryption key
	2024/09/15 06:51:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/15 06:51:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/15 06:51:39 Initializing JWE encryption key from synchronized object
	2024/09/15 06:51:39 Creating in-cluster Sidecar client
	2024/09/15 06:51:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/15 06:51:39 Serving insecurely on HTTP port: 9090
	2024/09/15 06:52:09 Successful request to sidecar
	2024/09/15 06:51:38 Starting overwatch
	
	
	==> storage-provisioner [91f3c99c8863a42d812cac899056f76ee39ebd50467451d7ebf6c94927730e07] <==
	I0915 06:49:00.989050       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:49:00.995575       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:49:00.995619       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [c8fcd9acbc55f6e73b9639cd9447905e27e6f6b5bae721c7fc13d4651636c728] <==
	I0915 06:49:36.832493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:49:36.842853       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:49:36.842982       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:49:54.239435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:49:54.239559       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1fadc95c-a30d-48ca-a487-bec3dd4dfae8", APIVersion:"v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-988233_14686427-d552-4c33-b4a8-8e679a30644a became leader
	I0915 06:49:54.239678       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-988233_14686427-d552-4c33-b4a8-8e679a30644a!
	I0915 06:49:54.339997       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-988233_14686427-d552-4c33-b4a8-8e679a30644a!
	I0915 06:50:04.492108       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0915 06:50:04.492277       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a7fc3dd8-d801-4bcc-ab44-39ef614cf55b", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0915 06:50:04.492185       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ed5fd836-deb2-43ee-813f-880e4f3d8295 343 0 2024-09-15 06:47:55 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-15 06:47:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  a7fc3dd8-d801-4bcc-ab44-39ef614cf55b 690 0 2024-09-15 06:50:04 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-15 06:50:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-15 06:50:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0915 06:50:04.492619       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b" provisioned
	I0915 06:50:04.492646       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0915 06:50:04.492653       1 volume_store.go:212] Trying to save persistentvolume "pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b"
	I0915 06:50:04.501181       1 volume_store.go:219] persistentvolume "pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b" saved
	I0915 06:50:04.501275       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a7fc3dd8-d801-4bcc-ab44-39ef614cf55b", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-988233 -n functional-988233
helpers_test.go:261: (dbg) Run:  kubectl --context functional-988233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-b264w nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-988233 describe pod busybox-mount mysql-6cdb49bbb-b264w nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-988233 describe pod busybox-mount mysql-6cdb49bbb-b264w nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-988233/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:51:12 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://86b1c49345dd4e67e996c14369949d1b19798ea772f28a79fdc74a0ea0da0e4a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 15 Sep 2024 06:51:34 +0000
	      Finished:     Sun, 15 Sep 2024 06:51:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgg9c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vgg9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  114s  default-scheduler  Successfully assigned default/busybox-mount to functional-988233
	  Normal  Pulling    115s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     93s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 857ms (21.719s including waiting). Image size: 4631262 bytes.
	  Normal  Created    93s   kubelet            Created container mount-munger
	  Normal  Started    93s   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-b264w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-988233/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:51:44 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hpx56 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hpx56:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  82s   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-b264w to functional-988233
	  Normal  Pulling    82s   kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-988233/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:49:58 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v2p6l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v2p6l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m8s                default-scheduler  Successfully assigned default/nginx-svc to functional-988233
	  Warning  Failed     2m7s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     56s (x2 over 2m7s)  kubelet            Error: ErrImagePull
	  Warning  Failed     56s                 kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    41s (x2 over 2m7s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     41s (x2 over 2m7s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    30s (x3 over 3m9s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-988233/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:50:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdgxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-rdgxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-988233
	  Warning  Failed     94s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     94s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    94s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     94s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    81s (x2 over 3m3s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.86s)
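
Note on the failure mode: the storage path itself worked. The storage-provisioner log above shows pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b provisioned and saved; the test only hit its deadline because sp-pod (and nginx-svc) could never pull their docker.io images past Docker Hub's toomanyrequests limit. A minimal sketch of how a harness could classify such runs, assuming client-go and the default kubeconfig; the "default"/"sp-pod" names come from the describe output above, everything else is illustrative and not part of the minikube test suite:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config; a harness would pass the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Events are matched on the involved object's kind and name.
	evs, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=sp-pod",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		if e.Reason == "Failed" && strings.Contains(e.Message, "toomanyrequests") {
			fmt.Printf("registry throttling, not a product bug: %s\n", e.Message)
		}
	}
}

Run at post-mortem time, a check like this would let the report label the run as registry throttling rather than a bare timeout.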

                                                
                                    
TestFunctional/parallel/MySQL (602.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-988233 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-b264w" [759c931a-17f9-489f-ab67-575f4cbb603b] Pending
2024/09/15 06:51:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-6cdb49bbb-b264w" [759c931a-17f9-489f-ab67-575f4cbb603b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-988233 -n functional-988233
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-09-15 07:01:44.962297599 +0000 UTC m=+1942.098481446
functional_test.go:1799: (dbg) Run:  kubectl --context functional-988233 describe po mysql-6cdb49bbb-b264w -n default
functional_test.go:1799: (dbg) kubectl --context functional-988233 describe po mysql-6cdb49bbb-b264w -n default:
Name:             mysql-6cdb49bbb-b264w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-988233/192.168.49.2
Start Time:       Sun, 15 Sep 2024 06:51:44 +0000
Labels:           app=mysql
pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-6cdb49bbb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hpx56 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hpx56:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-b264w to functional-988233
Normal   Pulling    3m39s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     2m25s (x4 over 8m33s)  kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m25s (x4 over 8m33s)  kubelet            Error: ErrImagePull
Normal   BackOff    117s (x7 over 8m33s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     117s (x7 over 8m33s)   kubelet            Error: ImagePullBackOff
functional_test.go:1799: (dbg) Run:  kubectl --context functional-988233 logs mysql-6cdb49bbb-b264w -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-988233 logs mysql-6cdb49bbb-b264w -n default: exit status 1 (67.237495ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-b264w" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-988233 logs mysql-6cdb49bbb-b264w -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
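
The 10m0s wait above ran to the context deadline even though the pod had been stuck in ErrImagePull/ImagePullBackOff from its first pull attempt. A hedged sketch of a wait loop that fails fast on terminal pull errors, assuming client-go; this is not the suite's actual helper, and the "app=mysql" selector and 10m timeout simply mirror this run:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForRunning polls pods matching selector until one is Running, the
// timeout expires, or a container reports an image-pull failure.
func WaitForRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // tolerate transient API errors; keep polling
			}
			for _, p := range pods.Items {
				for _, st := range p.Status.ContainerStatuses {
					if w := st.State.Waiting; w != nil &&
						(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
						// Waiting longer will not make the image appear; surface it now.
						return false, fmt.Errorf("pod %s stuck pulling %q: %s", p.Name, st.Image, w.Message)
					}
				}
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

WaitForRunning(cs, "default", "app=mysql", 10*time.Minute) would have returned the toomanyrequests message at the first back-off instead of after the full deadline. The trade-off: ImagePullBackOff can clear on retry, so a production helper might only bail after several consecutive back-offs; the sketch keeps it simple to show the idea.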
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-988233
helpers_test.go:235: (dbg) docker inspect functional-988233:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49",
	        "Created": "2024-09-15T06:47:37.440283707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:47:37.547477257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49/hostname",
	        "HostsPath": "/var/lib/docker/containers/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49/hosts",
	        "LogPath": "/var/lib/docker/containers/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49/41cfb8d78a48d23886f92b89657e03bc74416e5d390f7a9d1c707e24d124dd49-json.log",
	        "Name": "/functional-988233",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-988233:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-988233",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ae362ada78c2dc40cd6b50dde5cf008e1eb7e6edbc3d5b300ec167a74acb1a7e-init/diff:/var/lib/docker/overlay2/41629ade7f7315f2df14bde3ca812850a45d34be79d1a0e1cd0df4510f198eaa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae362ada78c2dc40cd6b50dde5cf008e1eb7e6edbc3d5b300ec167a74acb1a7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae362ada78c2dc40cd6b50dde5cf008e1eb7e6edbc3d5b300ec167a74acb1a7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae362ada78c2dc40cd6b50dde5cf008e1eb7e6edbc3d5b300ec167a74acb1a7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-988233",
	                "Source": "/var/lib/docker/volumes/functional-988233/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-988233",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-988233",
	                "name.minikube.sigs.k8s.io": "functional-988233",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "165dd519a144a57cfed2b8ef0f77b98daafd4934b73c3e52c21cad8e6e9f3c9f",
	            "SandboxKey": "/var/run/docker/netns/165dd519a144",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-988233": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2b6769285d74c8385812c5d317df12dd7f4e37bee5a33c33bba3672d8e768f27",
	                    "EndpointID": "3470fa941d9ae684b52405a617b072bcfee50eaccc5862000bda4d64f2c376cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-988233",
	                        "41cfb8d78a48"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-988233 -n functional-988233
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-988233 logs -n 25: (1.365377284s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-988233 ssh findmnt                                              | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -T /mount1                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh findmnt                                              | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -T /mount2                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh findmnt                                              | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | -T /mount3                                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-988233                                                       | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | --kill=true                                                                |                   |         |         |                     |                     |
	| image          | functional-988233 image load --daemon                                      | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-988233                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233 image ls                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-988233 image load --daemon                                      | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-988233                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233 image ls                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-988233 image save kicbase/echo-server:functional-988233         | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233 image rm                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | kicbase/echo-server:functional-988233                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233 image ls                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| image          | functional-988233 image load                                               | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh sudo                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | systemctl is-active docker                                                 |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh sudo                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | systemctl is-active containerd                                             |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh sudo cat                                             | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | /etc/test/nested/copy/12591/hosts                                          |                   |         |         |                     |                     |
	| image          | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-988233 ssh pgrep                                                | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-988233 image build -t                                           | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | localhost/my-image:functional-988233                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-988233 image ls                                                 | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| update-context | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-988233                                                          | functional-988233 | jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
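
The audit rows above record a complete image save → rm → load round-trip for kicbase/echo-server against the functional-988233 profile. As a rough sketch, the same sequence can be replayed by hand outside the test harness; the tarball path below is illustrative (the harness wrote to its Jenkins workspace):

	out/minikube-linux-amd64 -p functional-988233 image save kicbase/echo-server:functional-988233 /tmp/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-988233 image rm kicbase/echo-server:functional-988233 --alsologtostderr
	out/minikube-linux-amd64 -p functional-988233 image ls
	out/minikube-linux-amd64 -p functional-988233 image load /tmp/echo-server-save.tar --alsologtostderr

The interleaved image ls calls are what the test asserts on: the tag should disappear after rm and reappear after load.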
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:51:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:51:12.658837   57242 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:12.659116   57242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:12.659126   57242 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:12.659131   57242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:12.659311   57242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:51:12.659823   57242 out.go:352] Setting JSON to false
	I0915 06:51:12.660860   57242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2024,"bootTime":1726381049,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:51:12.660972   57242 start.go:139] virtualization: kvm guest
	I0915 06:51:12.663102   57242 out.go:177] * [functional-988233] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:51:12.664482   57242 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:51:12.664551   57242 notify.go:220] Checking for updates...
	I0915 06:51:12.667344   57242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:12.668673   57242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:51:12.669935   57242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:51:12.671092   57242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:12.672231   57242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:12.673972   57242 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:12.674649   57242 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:12.698649   57242 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:51:12.698760   57242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:51:12.750644   57242 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:51:12.739851903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:51:12.750744   57242 docker.go:318] overlay module found
	I0915 06:51:12.752922   57242 out.go:177] * Using the docker driver based on existing profile
	I0915 06:51:12.754063   57242 start.go:297] selected driver: docker
	I0915 06:51:12.754078   57242 start.go:901] validating driver "docker" against &{Name:functional-988233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-988233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:12.754240   57242 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:12.754341   57242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:51:12.806677   57242 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:51:12.797449918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:51:12.807286   57242 cni.go:84] Creating CNI manager for ""
	I0915 06:51:12.807333   57242 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0915 06:51:12.807397   57242 start.go:340] cluster config:
	{Name:functional-988233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-988233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:12.809120   57242 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 15 07:00:35 functional-988233 crio[4815]: time="2024-09-15 07:00:35.345373023Z" level=info msg="Image docker.io/nginx:alpine not found" id=18865fc9-61ad-456b-b2ea-7b747c93f4e8 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:38 functional-988233 crio[4815]: time="2024-09-15 07:00:38.345425520Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=23cc49c5-47cf-428b-938a-d05a5ff2dd0e name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:38 functional-988233 crio[4815]: time="2024-09-15 07:00:38.345693989Z" level=info msg="Image docker.io/mysql:5.7 not found" id=23cc49c5-47cf-428b-938a-d05a5ff2dd0e name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:43 functional-988233 crio[4815]: time="2024-09-15 07:00:43.345117067Z" level=info msg="Checking image status: docker.io/nginx:latest" id=550e5064-6424-474f-a933-217a3cb28b6b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:43 functional-988233 crio[4815]: time="2024-09-15 07:00:43.345317779Z" level=info msg="Image docker.io/nginx:latest not found" id=550e5064-6424-474f-a933-217a3cb28b6b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:49 functional-988233 crio[4815]: time="2024-09-15 07:00:49.345441281Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b1dd4425-f999-40e0-8434-309a33c46aa2 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:49 functional-988233 crio[4815]: time="2024-09-15 07:00:49.345661816Z" level=info msg="Image docker.io/nginx:alpine not found" id=b1dd4425-f999-40e0-8434-309a33c46aa2 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:52 functional-988233 crio[4815]: time="2024-09-15 07:00:52.345535296Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=b2b6a9d6-f48c-4f24-82f1-9c8769819484 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:52 functional-988233 crio[4815]: time="2024-09-15 07:00:52.345798187Z" level=info msg="Image docker.io/mysql:5.7 not found" id=b2b6a9d6-f48c-4f24-82f1-9c8769819484 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:52 functional-988233 crio[4815]: time="2024-09-15 07:00:52.346312622Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=a0b2fe03-b35e-4f5e-bddb-1e10e3db0844 name=/runtime.v1.ImageService/PullImage
	Sep 15 07:00:52 functional-988233 crio[4815]: time="2024-09-15 07:00:52.364287168Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Sep 15 07:00:58 functional-988233 crio[4815]: time="2024-09-15 07:00:58.345515449Z" level=info msg="Checking image status: docker.io/nginx:latest" id=8bee4065-91dd-46cc-abbd-2430f8482744 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:00:58 functional-988233 crio[4815]: time="2024-09-15 07:00:58.345796074Z" level=info msg="Image docker.io/nginx:latest not found" id=8bee4065-91dd-46cc-abbd-2430f8482744 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:00 functional-988233 crio[4815]: time="2024-09-15 07:01:00.345688187Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b32a4c31-4dfb-478c-8b87-c2f5e8dae397 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:00 functional-988233 crio[4815]: time="2024-09-15 07:01:00.345960556Z" level=info msg="Image docker.io/nginx:alpine not found" id=b32a4c31-4dfb-478c-8b87-c2f5e8dae397 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:11 functional-988233 crio[4815]: time="2024-09-15 07:01:11.344941629Z" level=info msg="Checking image status: docker.io/nginx:latest" id=9ba3a291-448a-4396-8d2b-1d07d7ef8b5f name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:11 functional-988233 crio[4815]: time="2024-09-15 07:01:11.345220673Z" level=info msg="Image docker.io/nginx:latest not found" id=9ba3a291-448a-4396-8d2b-1d07d7ef8b5f name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:22 functional-988233 crio[4815]: time="2024-09-15 07:01:22.345622295Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c1df2bfd-558c-4e51-8aa1-c6b6c9ec795b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:22 functional-988233 crio[4815]: time="2024-09-15 07:01:22.346109137Z" level=info msg="Image docker.io/nginx:latest not found" id=c1df2bfd-558c-4e51-8aa1-c6b6c9ec795b name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:23 functional-988233 crio[4815]: time="2024-09-15 07:01:23.181020074Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=7a351797-03a5-4181-bdd7-aee543af6cab name=/runtime.v1.ImageService/PullImage
	Sep 15 07:01:23 functional-988233 crio[4815]: time="2024-09-15 07:01:23.182264000Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 15 07:01:34 functional-988233 crio[4815]: time="2024-09-15 07:01:34.345568667Z" level=info msg="Checking image status: docker.io/nginx:latest" id=b66913ca-6e8d-4e00-b97e-0c156f617660 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:34 functional-988233 crio[4815]: time="2024-09-15 07:01:34.345844613Z" level=info msg="Image docker.io/nginx:latest not found" id=b66913ca-6e8d-4e00-b97e-0c156f617660 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:37 functional-988233 crio[4815]: time="2024-09-15 07:01:37.345382186Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=96c90f5f-4247-4aab-b1aa-c7de6223ac38 name=/runtime.v1.ImageService/ImageStatus
	Sep 15 07:01:37 functional-988233 crio[4815]: time="2024-09-15 07:01:37.345636025Z" level=info msg="Image docker.io/mysql:5.7 not found" id=96c90f5f-4247-4aab-b1aa-c7de6223ac38 name=/runtime.v1.ImageService/ImageStatus
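
The CRI-O entries above show a repeating pattern: ImageStatus reports docker.io/nginx:alpine, docker.io/nginx:latest, and docker.io/mysql:5.7 as not found, and the PullImage attempts that follow never log a completion, so those images stay missing across the whole window. To probe the same CRI endpoints by hand, something like the following should work (assuming crictl is available on the node, as it normally is in minikube's node image; the tags are taken from the log):

	out/minikube-linux-amd64 -p functional-988233 ssh -- sudo crictl images
	out/minikube-linux-amd64 -p functional-988233 ssh -- sudo crictl inspecti docker.io/library/nginx:alpine
	out/minikube-linux-amd64 -p functional-988233 ssh -- sudo crictl pull docker.io/library/mysql:5.7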
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	57bed4d3ae734       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   3a177335dfcaf       dashboard-metrics-scraper-c5db448b4-lk9lm
	3388d493d47a0       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   bbed29bcd6dd8       kubernetes-dashboard-695b96c756-rdplq
	86b1c49345dd4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   fbd546b63b7e0       busybox-mount
	ed4c0d281d0a4       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   f6725f0971b11       hello-node-6b9f76b5c7-lsj6m
	966c84fddcd55       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   6d9d915c2f67f       hello-node-connect-67bdd5bbb4-89pwf
	2d6813a912c1e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 12 minutes ago      Running             coredns                     2                   8773f021832ae       coredns-7c65d6cfc9-rk5fm
	481f0c1712799       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 12 minutes ago      Running             kindnet-cni                 2                   0da1601578efa       kindnet-zhpsl
	4121a2b8b9324       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 12 minutes ago      Running             kube-proxy                  2                   342d6085dfae0       kube-proxy-95pbv
	c8fcd9acbc55f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 12 minutes ago      Running             storage-provisioner         3                   9f3ad6db200f6       storage-provisioner
	ddb6f02e87c20       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 12 minutes ago      Running             kube-apiserver              0                   769be70480681       kube-apiserver-functional-988233
	c78356bd45056       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 12 minutes ago      Running             etcd                        2                   1cf756032d79b       etcd-functional-988233
	992a89a63c2bf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 12 minutes ago      Running             kube-controller-manager     2                   91fa36e569ad9       kube-controller-manager-functional-988233
	a09a9c2619b21       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 12 minutes ago      Running             kube-scheduler              2                   d24f38f0d8e79       kube-scheduler-functional-988233
	91f3c99c8863a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 12 minutes ago      Exited              storage-provisioner         2                   9f3ad6db200f6       storage-provisioner
	6ab67d1177f83       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 12 minutes ago      Exited              coredns                     1                   8773f021832ae       coredns-7c65d6cfc9-rk5fm
	e907d5b01c084       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 12 minutes ago      Exited              etcd                        1                   1cf756032d79b       etcd-functional-988233
	7ca7c7cccb4ef       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 12 minutes ago      Exited              kube-scheduler              1                   d24f38f0d8e79       kube-scheduler-functional-988233
	583b9a8f09411       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 12 minutes ago      Exited              kindnet-cni                 1                   0da1601578efa       kindnet-zhpsl
	a45a492b79ddc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 12 minutes ago      Exited              kube-proxy                  1                   342d6085dfae0       kube-proxy-95pbv
	89b9667e3df56       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 12 minutes ago      Exited              kube-controller-manager     1                   91fa36e569ad9       kube-controller-manager-functional-988233
	
	
	==> coredns [2d6813a912c1e6ff4bc22ceaa8d09de1786eedf0bf05c19c19b47bce0e2a11e7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54686 - 47530 "HINFO IN 1964691982139342335.3606816975868381648. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02897023s
	
	
	==> coredns [6ab67d1177f83f984381715f236bda94895da0a8d8c18e22059c04a79090914e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36249 - 56625 "HINFO IN 7174451905134525776.513574974473026608. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.101357567s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
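
The connection-refused lines from this first coredns container line up with the apiserver restart: every list/watch against 10.96.0.1:443 fails until the new kube-apiserver comes up, after which the kubernetes plugin syncs and the server starts on :53; the SIGTERM and lameduck shutdown at the end are the old container being replaced. A quick manual check that in-cluster DNS recovered is to resolve a service name from a disposable pod; a sketch (pod name and context are illustrative, the busybox image matches the one in the container listing below):

	kubectl --context functional-988233 run dns-check --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local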
	
	
	==> describe nodes <==
	Name:               functional-988233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-988233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=functional-988233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_47_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:47:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-988233
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 07:01:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:57:15 +0000   Sun, 15 Sep 2024 06:47:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:57:15 +0000   Sun, 15 Sep 2024 06:47:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:57:15 +0000   Sun, 15 Sep 2024 06:47:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:57:15 +0000   Sun, 15 Sep 2024 06:48:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-988233
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7dfdec46a7964646a7e1d2b20b49794b
	  System UUID:                4ae1499b-e27e-4343-b06d-678c48bc012c
	  Boot ID:                    d7eb9d55-e244-423e-b0bb-fd0ad06c12bb
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-lsj6m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-67bdd5bbb4-89pwf          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     mysql-6cdb49bbb-b264w                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-rk5fm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-functional-988233                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-zhpsl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-functional-988233             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-functional-988233    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-95pbv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-988233             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-lk9lm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-rdplq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-988233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-988233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-988233 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node functional-988233 event: Registered Node functional-988233 in Controller
	  Normal   NodeReady                13m                kubelet          Node functional-988233 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-988233 event: Registered Node functional-988233 in Controller
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-988233 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-988233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-988233 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-988233 event: Registered Node functional-988233 in Controller
	
	
	==> dmesg <==
	[  +0.000619] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.600975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.568733] kauditd_printk_skb: 46 callbacks suppressed
	[Sep15 06:41] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +1.004271] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +2.015809] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +4.127715] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[  +8.191377] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[ +16.126848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[Sep15 06:42] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6e 6d 4c f2 3c 5e c6 00 73 b4 2e 24 08 00
	[Sep15 06:51] FS-Cache: Duplicate cookie detected
	[  +0.004727] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006738] FS-Cache: O-cookie d=000000000b7cd976{9P.session} n=0000000071b4f7d6
	[  +0.007524] FS-Cache: O-key=[10] '34323935343034373731'
	[  +0.005409] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.008046] FS-Cache: N-cookie d=000000000b7cd976{9P.session} n=000000002f47d984
	[  +0.008988] FS-Cache: N-key=[10] '34323935343034373731'
	[  +7.987909] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [c78356bd450563c4b24c613a21ae445158c1445a3267f8ab291f7a2bc46a22aa] <==
	{"level":"info","ts":"2024-09-15T06:49:33.243442Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:49:33.246460Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-15T06:49:33.246586Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:49:33.246616Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:49:33.246854Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T06:49:33.246931Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T06:49:34.733056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:34.733112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:34.733160Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-15T06:49:34.733175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-15T06:49:34.733181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-15T06:49:34.733190Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-15T06:49:34.733205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-15T06:49:34.734317Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-988233 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:49:34.734339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:49:34.734333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:49:34.734518Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:49:34.734538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:49:34.735532Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:49:34.735538Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:49:34.737056Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:49:34.737154Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:59:34.753935Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1086}
	{"level":"info","ts":"2024-09-15T06:59:34.774487Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1086,"took":"20.198914ms","hash":1547945592,"current-db-size-bytes":4030464,"current-db-size":"4.0 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-15T06:59:34.774551Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1547945592,"revision":1086,"compact-revision":-1}
	
	
	==> etcd [e907d5b01c0844c5e53aae3914efddffc4a2da35305cd5420deb9ca2b3475a85] <==
	{"level":"info","ts":"2024-09-15T06:48:50.147822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-15T06:48:50.147856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:48:50.147872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-15T06:48:50.147881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-15T06:48:50.147904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-15T06:48:50.147922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-15T06:48:50.148899Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-988233 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:48:50.148926Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:48:50.148944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:48:50.149145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:48:50.149180Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:48:50.149867Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:48:50.150200Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:48:50.150679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:48:50.150962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:49:15.450932Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-15T06:49:15.451035Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-988233","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-15T06:49:15.451130Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:49:15.451263Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:49:15.462544Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T06:49:15.462602Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-15T06:49:15.464074Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-15T06:49:15.466722Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:49:15.466827Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:49:15.466842Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-988233","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
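
Both etcd logs above show clean single-voting-member elections (term 3 before the restart, term 4 after), and the scheduled compaction at 06:59:34 finishes in about 20 ms on a 4.0 MB database, so etcd itself looks healthy throughout. To query member health directly, etcdctl can be run inside the etcd pod with the cert paths the server logs at startup; a sketch (pod name from the container listing above, assuming etcdctl is on PATH in the pod as in the stock etcd image):

	kubectl --context functional-988233 -n kube-system exec etcd-functional-988233 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status --write-out=table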
	
	
	==> kernel <==
	 07:01:46 up 44 min,  0 users,  load average: 0.01, 0.11, 0.23
	Linux functional-988233 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [481f0c17127996675ca77ca1804141eddaefcbe530512e0b438d0824634bea42] <==
	I0915 06:59:37.353191       1 main.go:299] handling current node
	I0915 06:59:47.359735       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:59:47.359770       1 main.go:299] handling current node
	I0915 06:59:57.356279       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:59:57.356320       1 main.go:299] handling current node
	I0915 07:00:07.353606       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:00:07.353660       1 main.go:299] handling current node
	I0915 07:00:17.361984       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:00:17.362031       1 main.go:299] handling current node
	I0915 07:00:27.354251       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:00:27.354290       1 main.go:299] handling current node
	I0915 07:00:37.353038       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:00:37.353080       1 main.go:299] handling current node
	I0915 07:00:47.353285       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:00:47.353346       1 main.go:299] handling current node
	I0915 07:00:57.358773       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:00:57.358810       1 main.go:299] handling current node
	I0915 07:01:07.362892       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:01:07.362927       1 main.go:299] handling current node
	I0915 07:01:17.362196       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:01:17.362232       1 main.go:299] handling current node
	I0915 07:01:27.353021       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:01:27.353056       1 main.go:299] handling current node
	I0915 07:01:37.353752       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 07:01:37.353878       1 main.go:299] handling current node
	
	
	==> kindnet [583b9a8f0941166da0b5e85a1ef3e551c87272617cd3792995b6c9517a084c0a] <==
	I0915 06:48:48.626360       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0915 06:48:48.626850       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0915 06:48:48.627047       1 main.go:148] setting mtu 1500 for CNI 
	I0915 06:48:48.627095       1 main.go:178] kindnetd IP family: "ipv4"
	I0915 06:48:48.627128       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0915 06:48:49.045381       1 controller.go:334] Starting controller kube-network-policies
	I0915 06:48:49.045471       1 controller.go:338] Waiting for informer caches to sync
	I0915 06:48:49.045498       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0915 06:48:51.423739       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0915 06:48:51.423850       1 metrics.go:61] Registering metrics
	I0915 06:48:51.423937       1 controller.go:374] Syncing nftables rules
	I0915 06:48:59.046044       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:48:59.046112       1 main.go:299] handling current node
	I0915 06:49:09.047261       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:49:09.047301       1 main.go:299] handling current node
	
	
	==> kube-apiserver [ddb6f02e87c2003f2ed74624b00ab8c6d02bf0ac852049c3fe464f2f7e17b01a] <==
	I0915 06:49:35.761174       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 06:49:35.820440       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 06:49:35.820525       1 aggregator.go:171] initial CRD sync complete...
	I0915 06:49:35.820537       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 06:49:35.820545       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 06:49:35.820551       1 cache.go:39] Caches are synced for autoregister controller
	I0915 06:49:35.820599       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0915 06:49:35.826131       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0915 06:49:36.656515       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 06:49:37.754636       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 06:49:37.869914       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 06:49:37.879328       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 06:49:37.926821       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 06:49:37.931949       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 06:49:53.832616       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.235.34"}
	I0915 06:49:53.840814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 06:49:53.840823       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 06:49:58.502630       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.161.12"}
	I0915 06:49:59.314822       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0915 06:49:59.397019       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.233.212"}
	I0915 06:49:59.511774       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.53.235"}
	I0915 06:51:13.736293       1 controller.go:615] quota admission added evaluator for: namespaces
	I0915 06:51:13.946553       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.11.205"}
	I0915 06:51:13.959159       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.208.35"}
	I0915 06:51:44.644135       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.100.88"}
	
	
	==> kube-controller-manager [89b9667e3df56b5f5d6ddf49b4347760baae65ef8111e1fc806e4881304dcfbe] <==
	I0915 06:48:53.694997       1 shared_informer.go:320] Caches are synced for daemon sets
	I0915 06:48:53.695018       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0915 06:48:53.694961       1 shared_informer.go:320] Caches are synced for job
	I0915 06:48:53.695030       1 shared_informer.go:320] Caches are synced for endpoint
	I0915 06:48:53.695067       1 shared_informer.go:320] Caches are synced for PVC protection
	I0915 06:48:53.695080       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-988233"
	I0915 06:48:53.695071       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0915 06:48:53.695074       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0915 06:48:53.695212       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0915 06:48:53.698265       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0915 06:48:53.699975       1 shared_informer.go:320] Caches are synced for stateful set
	I0915 06:48:53.744588       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0915 06:48:53.749846       1 shared_informer.go:320] Caches are synced for namespace
	I0915 06:48:53.775696       1 shared_informer.go:320] Caches are synced for cronjob
	I0915 06:48:53.794793       1 shared_informer.go:320] Caches are synced for service account
	I0915 06:48:53.803526       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="158.227925ms"
	I0915 06:48:53.803739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.13µs"
	I0915 06:48:53.855645       1 shared_informer.go:320] Caches are synced for HPA
	I0915 06:48:53.892122       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:48:53.898627       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:48:54.310454       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:48:54.360817       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:48:54.360850       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0915 06:48:56.952291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.098541ms"
	I0915 06:48:56.952408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.35µs"
	
	
	==> kube-controller-manager [992a89a63c2bfe77f67b4968e1bc9e2c8e43351fb52fa34745713e296461096c] <==
	I0915 06:51:13.931403       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="74.846µs"
	I0915 06:51:13.937650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="17.21897ms"
	I0915 06:51:13.950927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.232629ms"
	I0915 06:51:13.951030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="58.425µs"
	I0915 06:51:13.951120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="27.272µs"
	I0915 06:51:38.331511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-988233"
	I0915 06:51:39.743770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.753984ms"
	I0915 06:51:39.743877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="66.192µs"
	I0915 06:51:41.750731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="8.620333ms"
	I0915 06:51:41.750829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="58.4µs"
	I0915 06:51:44.696043       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="15.330083ms"
	I0915 06:51:44.701163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="5.075785ms"
	I0915 06:51:44.701254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="48.327µs"
	I0915 06:51:44.702025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="57.447µs"
	I0915 06:52:09.090575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-988233"
	I0915 06:53:12.921310       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="80.995µs"
	I0915 06:53:26.354387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="62.323µs"
	I0915 06:55:29.352881       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="126.956µs"
	I0915 06:55:43.353847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="140.61µs"
	I0915 06:57:15.355507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-988233"
	I0915 06:57:30.354600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="138.147µs"
	I0915 06:57:42.355186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="130.262µs"
	I0915 06:59:34.354404       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="128.711µs"
	I0915 06:59:48.355589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="74.702µs"
	I0915 07:01:37.354326       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="70.957µs"
	
	
	==> kube-proxy [4121a2b8b932472034d1bc8b1cd95546f34d3e8a03d8db7b74e270f1f480da04] <==
	I0915 06:49:36.938818       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:49:37.079836       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:49:37.079908       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:49:37.136509       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:49:37.136580       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:49:37.138402       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:49:37.138767       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:49:37.138798       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:49:37.140128       1 config.go:199] "Starting service config controller"
	I0915 06:49:37.140176       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:49:37.140218       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:49:37.140224       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:49:37.140876       1 config.go:328] "Starting node config controller"
	I0915 06:49:37.140954       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:49:37.241403       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:49:37.241409       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:49:37.241457       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a45a492b79ddc757624100cdbdf6b431835e9804598ea6d7f6dca1445b286574] <==
	I0915 06:48:48.637679       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:48:51.332972       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:48:51.333183       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:48:51.535507       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:48:51.535658       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:48:51.541720       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:48:51.542142       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:48:51.542182       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:48:51.543233       1 config.go:199] "Starting service config controller"
	I0915 06:48:51.543273       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:48:51.543294       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:48:51.543301       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:48:51.543845       1 config.go:328] "Starting node config controller"
	I0915 06:48:51.543868       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:48:51.644269       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:48:51.644292       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:48:51.644307       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [7ca7c7cccb4efd45e80019ea017bb337d062924034af15a4074def83557b1840] <==
	I0915 06:48:49.637406       1 serving.go:386] Generated self-signed cert in-memory
	I0915 06:48:51.427150       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 06:48:51.427252       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:48:51.432872       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0915 06:48:51.432907       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 06:48:51.432914       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0915 06:48:51.432980       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0915 06:48:51.433000       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0915 06:48:51.432929       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:48:51.433680       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 06:48:51.433760       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 06:48:51.533742       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0915 06:48:51.533790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:48:51.533745       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	E0915 06:49:15.450626       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a09a9c2619b217bf22a942008e5719652a4c95aeeb69b4c64cf31efab26415f0] <==
	I0915 06:49:33.947883       1 serving.go:386] Generated self-signed cert in-memory
	I0915 06:49:35.742660       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 06:49:35.742684       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:49:35.746341       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0915 06:49:35.746356       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 06:49:35.746377       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0915 06:49:35.746378       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:49:35.746378       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0915 06:49:35.746395       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0915 06:49:35.746616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 06:49:35.747456       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 06:49:35.846928       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:49:35.846954       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0915 06:49:35.846938       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Sep 15 07:00:42 functional-988233 kubelet[5178]: E0915 07:00:42.544569    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383642544390184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:42 functional-988233 kubelet[5178]: E0915 07:00:42.544606    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383642544390184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:43 functional-988233 kubelet[5178]: E0915 07:00:43.345560    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="5c05705e-08b2-4fe9-924f-f55145b976f8"
	Sep 15 07:00:49 functional-988233 kubelet[5178]: E0915 07:00:49.345886    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="17288952-bc71-4475-95b5-3b5ceb1e6ca7"
	Sep 15 07:00:52 functional-988233 kubelet[5178]: E0915 07:00:52.546630    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383652546480772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:52 functional-988233 kubelet[5178]: E0915 07:00:52.546659    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383652546480772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:00:58 functional-988233 kubelet[5178]: E0915 07:00:58.346027    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="5c05705e-08b2-4fe9-924f-f55145b976f8"
	Sep 15 07:01:02 functional-988233 kubelet[5178]: E0915 07:01:02.548190    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383662548002248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:02 functional-988233 kubelet[5178]: E0915 07:01:02.548243    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383662548002248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:11 functional-988233 kubelet[5178]: E0915 07:01:11.345655    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="5c05705e-08b2-4fe9-924f-f55145b976f8"
	Sep 15 07:01:12 functional-988233 kubelet[5178]: E0915 07:01:12.549601    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383672549436549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:12 functional-988233 kubelet[5178]: E0915 07:01:12.549631    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383672549436549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:22 functional-988233 kubelet[5178]: E0915 07:01:22.346347    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="5c05705e-08b2-4fe9-924f-f55145b976f8"
	Sep 15 07:01:22 functional-988233 kubelet[5178]: E0915 07:01:22.550953    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383682550795535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:22 functional-988233 kubelet[5178]: E0915 07:01:22.550987    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383682550795535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:23 functional-988233 kubelet[5178]: E0915 07:01:23.180570    5178 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 15 07:01:23 functional-988233 kubelet[5178]: E0915 07:01:23.180638    5178 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 15 07:01:23 functional-988233 kubelet[5178]: E0915 07:01:23.180918    5178 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpx56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-b264w_default(759c931a-17f9-489f-ab67-575f4cbb603b): ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 15 07:01:23 functional-988233 kubelet[5178]: E0915 07:01:23.182169    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-b264w" podUID="759c931a-17f9-489f-ab67-575f4cbb603b"
	Sep 15 07:01:32 functional-988233 kubelet[5178]: E0915 07:01:32.552319    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383692552133500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:32 functional-988233 kubelet[5178]: E0915 07:01:32.552364    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383692552133500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:34 functional-988233 kubelet[5178]: E0915 07:01:34.346063    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="5c05705e-08b2-4fe9-924f-f55145b976f8"
	Sep 15 07:01:37 functional-988233 kubelet[5178]: E0915 07:01:37.345938    5178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-b264w" podUID="759c931a-17f9-489f-ab67-575f4cbb603b"
	Sep 15 07:01:42 functional-988233 kubelet[5178]: E0915 07:01:42.554434    5178 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383702554255432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 15 07:01:42 functional-988233 kubelet[5178]: E0915 07:01:42.554468    5178 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726383702554255432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [3388d493d47a038b791c2922ded0409cd3571367901f43b52867256c239523fb] <==
	2024/09/15 06:51:38 Starting overwatch
	2024/09/15 06:51:38 Using namespace: kubernetes-dashboard
	2024/09/15 06:51:38 Using in-cluster config to connect to apiserver
	2024/09/15 06:51:38 Using secret token for csrf signing
	2024/09/15 06:51:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/15 06:51:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/15 06:51:38 Successful initial request to the apiserver, version: v1.31.1
	2024/09/15 06:51:38 Generating JWE encryption key
	2024/09/15 06:51:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/15 06:51:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/15 06:51:39 Initializing JWE encryption key from synchronized object
	2024/09/15 06:51:39 Creating in-cluster Sidecar client
	2024/09/15 06:51:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/15 06:51:39 Serving insecurely on HTTP port: 9090
	2024/09/15 06:52:09 Successful request to sidecar
	
	
	==> storage-provisioner [91f3c99c8863a42d812cac899056f76ee39ebd50467451d7ebf6c94927730e07] <==
	I0915 06:49:00.989050       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:49:00.995575       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:49:00.995619       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [c8fcd9acbc55f6e73b9639cd9447905e27e6f6b5bae721c7fc13d4651636c728] <==
	I0915 06:49:36.832493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:49:36.842853       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:49:36.842982       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:49:54.239435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:49:54.239559       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1fadc95c-a30d-48ca-a487-bec3dd4dfae8", APIVersion:"v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-988233_14686427-d552-4c33-b4a8-8e679a30644a became leader
	I0915 06:49:54.239678       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-988233_14686427-d552-4c33-b4a8-8e679a30644a!
	I0915 06:49:54.339997       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-988233_14686427-d552-4c33-b4a8-8e679a30644a!
	I0915 06:50:04.492108       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0915 06:50:04.492277       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a7fc3dd8-d801-4bcc-ab44-39ef614cf55b", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0915 06:50:04.492185       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ed5fd836-deb2-43ee-813f-880e4f3d8295 343 0 2024-09-15 06:47:55 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-15 06:47:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  a7fc3dd8-d801-4bcc-ab44-39ef614cf55b 690 0 2024-09-15 06:50:04 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-15 06:50:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-15 06:50:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0915 06:50:04.492619       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b" provisioned
	I0915 06:50:04.492646       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0915 06:50:04.492653       1 volume_store.go:212] Trying to save persistentvolume "pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b"
	I0915 06:50:04.501181       1 volume_store.go:219] persistentvolume "pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b" saved
	I0915 06:50:04.501275       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a7fc3dd8-d801-4bcc-ab44-39ef614cf55b", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a7fc3dd8-d801-4bcc-ab44-39ef614cf55b
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-988233 -n functional-988233
helpers_test.go:261: (dbg) Run:  kubectl --context functional-988233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-b264w nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-988233 describe pod busybox-mount mysql-6cdb49bbb-b264w nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-988233 describe pod busybox-mount mysql-6cdb49bbb-b264w nginx-svc sp-pod:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-988233/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:51:12 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://86b1c49345dd4e67e996c14369949d1b19798ea772f28a79fdc74a0ea0da0e4a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 15 Sep 2024 06:51:34 +0000
	      Finished:     Sun, 15 Sep 2024 06:51:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgg9c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vgg9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-988233
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 857ms (21.719s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-b264w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-988233/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:51:44 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hpx56 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hpx56:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-b264w to functional-988233
	  Normal   Pulling    3m41s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m27s (x4 over 8m35s)  kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m27s (x4 over 8m35s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    119s (x7 over 8m35s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     119s (x7 over 8m35s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-988233/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:49:58 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v2p6l (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v2p6l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  11m                    default-scheduler  Successfully assigned default/nginx-svc to functional-988233
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m40s (x4 over 11m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m31s (x4 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m31s (x3 over 9m36s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m2s (x7 over 10m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    98s (x18 over 10m)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-988233/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:50:04 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rdgxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-rdgxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/sp-pod to functional-988233
	  Normal   Pulling    5m14s (x4 over 11m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m59s (x4 over 10m)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m59s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m34s (x7 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    101s (x12 over 10m)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.67s)
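
Note on the failure mode: every pull of docker.io/mysql:5.7 in the events above ends in "toomanyrequests", Docker Hub's anonymous pull rate limit, so the mysql container never starts and the test times out. One possible mitigation, sketched here only as a suggestion and not part of this run, is to side-load the image so the kubelet never pulls from docker.io:

	# Sketch: pull once from a host that is authenticated or not throttled,
	# then copy the image into the cluster's container runtime.
	docker pull mysql:5.7
	out/minikube-linux-amd64 -p functional-988233 image load mysql:5.7

With the image already in cri-o's store and the container's ImagePullPolicy of IfNotPresent (see the kubelet log above), the pod could start without touching the registry.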

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-988233 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [17288952-bc71-4475-95b5-3b5ceb1e6ca7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-988233 -n functional-988233
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2024-09-15 06:53:58.782003277 +0000 UTC m=+1475.918187118
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-988233 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-988233 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-988233/192.168.49.2
Start Time:       Sun, 15 Sep 2024 06:49:58 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:  10.244.0.4
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v2p6l (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-v2p6l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m                    default-scheduler  Successfully assigned default/nginx-svc to functional-988233
  Warning  Failed     2m58s                 kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     107s (x2 over 2m58s)  kubelet            Error: ErrImagePull
  Warning  Failed     107s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    92s (x2 over 2m58s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     92s (x2 over 2m58s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    81s (x3 over 4m)      kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-988233 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-988233 logs nginx-svc -n default: exit status 1 (57.932799ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-988233 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.61s)
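Root cause: Docker Hub's anonymous pull rate limit ("toomanyrequests") blocked the docker.io/nginx:alpine pull, so the pod never left ImagePullBackOff and the 4m0s wait expired. A minimal sketch for checking the remaining anonymous pull budget from the CI host, using Docker's documented rate-limit probe (assumes curl and jq are available on the host; ratelimitpreview/test is the probe image Docker provides for this purpose):

	# Fetch an anonymous token scoped to the rate-limit probe image
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# HEAD the manifest and inspect the ratelimit-limit / ratelimit-remaining headers
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit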

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (484.743763ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:344: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.48s)
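The daemon's error message points at the fix: authenticated pulls get a substantially higher rate limit than anonymous ones. A hedged sketch (DOCKER_USER and DOCKER_PAT are placeholder credentials, not values from this run; a Docker Hub personal access token works as the password):

	# Log the CI host in so subsequent pulls count against the account's higher limit
	echo "$DOCKER_PAT" | docker login --username "$DOCKER_USER" --password-stdin
	docker pull kicbase/echo-server:1.0   # the pull this setup step needs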

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image load --daemon kicbase/echo-server:functional-988233 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-988233" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image load --daemon kicbase/echo-server:functional-988233 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-988233" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.47s)
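This failure and the identical ImageLoadDaemon failure above are cascades, not independent bugs: the rate-limited Setup test never pulled and tagged kicbase/echo-server, so `image load --daemon` has no local image to push into the cluster and `image ls` cannot find the tag afterwards. For reference, the sequence these tests exercise when Setup succeeds (commands taken from this run's invocations, plus the tag step the suite presumably performs between pull and load):

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-988233
	out/minikube-linux-amd64 -p functional-988233 image load --daemon kicbase/echo-server:functional-988233
	out/minikube-linux-amd64 -p functional-988233 image ls   # the tag should now be listed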

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (431.113088ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:237: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image save kicbase/echo-server:functional-988233 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:411: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0915 06:51:43.661697   59911 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:43.661862   59911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:43.661873   59911 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:43.661877   59911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:43.662058   59911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:51:43.662628   59911 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:43.662722   59911 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:43.663099   59911 cli_runner.go:164] Run: docker container inspect functional-988233 --format={{.State.Status}}
	I0915 06:51:43.679843   59911 ssh_runner.go:195] Run: systemctl --version
	I0915 06:51:43.679892   59911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988233
	I0915 06:51:43.696251   59911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/functional-988233/id_rsa Username:docker}
	I0915 06:51:43.784267   59911 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W0915 06:51:43.784325   59911 cache_images.go:253] Failed to load cached images for "functional-988233": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I0915 06:51:43.784356   59911 cache_images.go:265] failed pushing to: functional-988233

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.17s)
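Same cascade: ImageSaveToFile had no kicbase/echo-server:functional-988233 image to export, so the tarball was never written, and ImageLoadFromFile then fails stat-ing the missing file (see the cache_images.go warning above). The intended round trip, sketched with a hypothetical /tmp path in place of the Jenkins workspace path:

	out/minikube-linux-amd64 -p functional-988233 image save kicbase/echo-server:functional-988233 /tmp/echo-server-save.tar
	ls -l /tmp/echo-server-save.tar   # confirm the tarball exists before loading it back
	out/minikube-linux-amd64 -p functional-988233 image load /tmp/echo-server-save.tar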

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-988233
functional_test.go:419: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-988233: exit status 1 (15.944464ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-988233

                                                
                                                
** /stderr **
functional_test.go:421: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-988233

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-988233 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.101.161.12   10.101.161.12   80:31728/TCP   5m48s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.13s)
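This failure also traces back to WaitService/Setup: the tunnel did assign the LoadBalancer an external IP (10.101.161.12 in the kubectl output above), but the nginx pod never served traffic, and the test ended up probing an empty URL. A sketch of the manual check, assuming `minikube tunnel` is still running for the profile:

	IP=$(kubectl --context functional-988233 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -sS "http://$IP/" | grep -i "Welcome to nginx"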

                                                
                                    

Test pass (287/327)

Order  Test name  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 4.87
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 3.79
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.05
21 TestBinaryMirror 0.73
22 TestOffline 54.89
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 179.46
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.64
37 TestAddons/parallel/HelmTiller 11.69
39 TestAddons/parallel/CSI 55.57
40 TestAddons/parallel/Headlamp 58.3
41 TestAddons/parallel/CloudSpanner 5.46
43 TestAddons/parallel/NvidiaDevicePlugin 5.43
44 TestAddons/parallel/Yakd 11.63
45 TestAddons/StoppedEnableDisable 12.05
46 TestCertOptions 25.36
47 TestCertExpiration 217.92
49 TestForceSystemdFlag 26.82
50 TestForceSystemdEnv 29.65
52 TestKVMDriverInstallOrUpdate 3.24
56 TestErrorSpam/setup 20.42
57 TestErrorSpam/start 0.55
58 TestErrorSpam/status 0.83
59 TestErrorSpam/pause 1.47
60 TestErrorSpam/unpause 1.63
61 TestErrorSpam/stop 1.33
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 67.32
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 27.99
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.97
73 TestFunctional/serial/CacheCmd/cache/add_local 1.32
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 36.81
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.28
84 TestFunctional/serial/LogsFileCmd 1.31
85 TestFunctional/serial/InvalidService 3.85
87 TestFunctional/parallel/ConfigCmd 0.34
88 TestFunctional/parallel/DashboardCmd 32.82
89 TestFunctional/parallel/DryRun 0.33
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.86
95 TestFunctional/parallel/ServiceCmdConnect 70.47
96 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/SSHCmd 0.58
100 TestFunctional/parallel/CpCmd 1.72
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.73
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
111 TestFunctional/parallel/License 0.21
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/ServiceCmd/DeployApp 70.14
118 TestFunctional/parallel/ServiceCmd/List 0.47
119 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
121 TestFunctional/parallel/ProfileCmd/profile_list 0.33
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
124 TestFunctional/parallel/MountCmd/any-port 26.61
125 TestFunctional/parallel/ServiceCmd/Format 0.35
126 TestFunctional/parallel/ServiceCmd/URL 0.31
127 TestFunctional/parallel/MountCmd/specific-port 2.01
128 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
133 TestFunctional/parallel/ImageCommands/ImageBuild 1.83
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
142 TestFunctional/parallel/Version/short 0.04
143 TestFunctional/parallel/Version/components 0.44
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 149.78
159 TestMultiControlPlane/serial/DeployApp 3.93
160 TestMultiControlPlane/serial/PingHostFromPods 0.98
161 TestMultiControlPlane/serial/AddWorkerNode 30.03
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.64
164 TestMultiControlPlane/serial/CopyFile 15.27
165 TestMultiControlPlane/serial/StopSecondaryNode 12.44
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
167 TestMultiControlPlane/serial/RestartSecondaryNode 20.31
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 15.54
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 186.88
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.08
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.45
172 TestMultiControlPlane/serial/StopCluster 35.52
173 TestMultiControlPlane/serial/RestartCluster 107.51
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.45
175 TestMultiControlPlane/serial/AddSecondaryNode 66.35
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.62
180 TestJSONOutput/start/Command 38.01
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.64
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.58
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.72
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.19
205 TestKicCustomNetwork/create_custom_network 27.01
206 TestKicCustomNetwork/use_default_bridge_network 26.65
207 TestKicExistingNetwork 23.52
208 TestKicCustomSubnet 25.81
209 TestKicStaticIP 25.84
210 TestMainNoArgs 0.04
211 TestMinikubeProfile 51.26
214 TestMountStart/serial/StartWithMountFirst 5.48
215 TestMountStart/serial/VerifyMountFirst 0.24
216 TestMountStart/serial/StartWithMountSecond 8.2
217 TestMountStart/serial/VerifyMountSecond 0.24
218 TestMountStart/serial/DeleteFirst 1.58
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.17
221 TestMountStart/serial/RestartStopped 7.16
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 66.01
226 TestMultiNode/serial/DeployApp2Nodes 3.55
227 TestMultiNode/serial/PingHostFrom2Pods 0.67
228 TestMultiNode/serial/AddNode 29
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.28
231 TestMultiNode/serial/CopyFile 8.78
232 TestMultiNode/serial/StopNode 2.07
233 TestMultiNode/serial/StartAfterStop 9.4
234 TestMultiNode/serial/RestartKeepsNodes 102.69
235 TestMultiNode/serial/DeleteNode 5.17
236 TestMultiNode/serial/StopMultiNode 23.67
237 TestMultiNode/serial/RestartMultiNode 49.88
238 TestMultiNode/serial/ValidateNameConflict 22.47
243 TestPreload 101.4
245 TestScheduledStopUnix 97.4
248 TestInsufficientStorage 12.43
249 TestRunningBinaryUpgrade 57.83
251 TestKubernetesUpgrade 347.67
252 TestMissingContainerUpgrade 130.56
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
255 TestStoppedBinaryUpgrade/Setup 0.47
263 TestNoKubernetes/serial/StartWithK8s 34.22
264 TestStoppedBinaryUpgrade/Upgrade 89.65
265 TestNoKubernetes/serial/StartWithStopK8s 12.12
266 TestNoKubernetes/serial/Start 7.77
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
268 TestNoKubernetes/serial/ProfileList 11.44
269 TestNoKubernetes/serial/Stop 4.06
270 TestNoKubernetes/serial/StartNoArgs 9.8
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
274 TestPause/serial/Start 76.51
282 TestNetworkPlugins/group/false 3.01
286 TestPause/serial/SecondStartNoReconfiguration 38.8
287 TestPause/serial/Pause 0.7
288 TestPause/serial/VerifyStatus 0.29
289 TestPause/serial/Unpause 0.6
290 TestPause/serial/PauseAgain 0.73
291 TestPause/serial/DeletePaused 2.33
292 TestPause/serial/VerifyDeletedResources 2.35
294 TestStartStop/group/old-k8s-version/serial/FirstStart 143.25
296 TestStartStop/group/embed-certs/serial/FirstStart 71.23
297 TestStartStop/group/embed-certs/serial/DeployApp 8.23
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.8
299 TestStartStop/group/embed-certs/serial/Stop 11.85
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
301 TestStartStop/group/embed-certs/serial/SecondStart 261.98
303 TestStartStop/group/no-preload/serial/FirstStart 52.58
304 TestStartStop/group/old-k8s-version/serial/DeployApp 8.4
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.48
306 TestStartStop/group/old-k8s-version/serial/Stop 13.26
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
308 TestStartStop/group/old-k8s-version/serial/SecondStart 138.2
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.66
311 TestStartStop/group/no-preload/serial/DeployApp 8.26
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
313 TestStartStop/group/no-preload/serial/Stop 12.97
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
315 TestStartStop/group/no-preload/serial/SecondStart 262.32
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.24
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.86
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.85
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
320 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.32
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
324 TestStartStop/group/old-k8s-version/serial/Pause 2.53
326 TestStartStop/group/newest-cni/serial/FirstStart 28.26
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.81
329 TestStartStop/group/newest-cni/serial/Stop 1.22
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
331 TestStartStop/group/newest-cni/serial/SecondStart 12.74
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
336 TestStartStop/group/newest-cni/serial/Pause 2.6
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
338 TestNetworkPlugins/group/auto/Start 42.74
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
340 TestStartStop/group/embed-certs/serial/Pause 2.89
341 TestNetworkPlugins/group/kindnet/Start 39.32
342 TestNetworkPlugins/group/auto/KubeletFlags 0.25
343 TestNetworkPlugins/group/auto/NetCatPod 9.18
344 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
345 TestNetworkPlugins/group/auto/DNS 0.12
346 TestNetworkPlugins/group/auto/Localhost 0.1
347 TestNetworkPlugins/group/auto/HairPin 0.1
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
349 TestNetworkPlugins/group/kindnet/NetCatPod 10.17
350 TestNetworkPlugins/group/kindnet/DNS 0.13
351 TestNetworkPlugins/group/kindnet/Localhost 0.12
352 TestNetworkPlugins/group/kindnet/HairPin 0.1
353 TestNetworkPlugins/group/calico/Start 55.81
354 TestNetworkPlugins/group/custom-flannel/Start 47.85
355 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
356 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
357 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
358 TestStartStop/group/no-preload/serial/Pause 2.61
359 TestNetworkPlugins/group/enable-default-cni/Start 66.33
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
362 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.18
363 TestNetworkPlugins/group/calico/KubeletFlags 0.3
364 TestNetworkPlugins/group/calico/NetCatPod 9.18
365 TestNetworkPlugins/group/custom-flannel/DNS 0.12
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
367 TestNetworkPlugins/group/calico/DNS 0.15
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
369 TestNetworkPlugins/group/calico/Localhost 0.11
370 TestNetworkPlugins/group/calico/HairPin 0.1
371 TestNetworkPlugins/group/flannel/Start 48.69
372 TestNetworkPlugins/group/bridge/Start 67.26
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.95
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
384 TestNetworkPlugins/group/flannel/NetCatPod 10.19
385 TestNetworkPlugins/group/flannel/DNS 0.12
386 TestNetworkPlugins/group/flannel/Localhost 0.1
387 TestNetworkPlugins/group/flannel/HairPin 0.1
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
389 TestNetworkPlugins/group/bridge/NetCatPod 9.17
390 TestNetworkPlugins/group/bridge/DNS 0.13
391 TestNetworkPlugins/group/bridge/Localhost 0.11
392 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (4.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-319436 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-319436 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.867784794s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.87s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-319436
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-319436: exit status 85 (56.428395ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-319436 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |          |
	|         | -p download-only-319436        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:22.934890   12603 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:22.934981   12603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:22.934988   12603 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:22.934993   12603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:22.935160   12603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	W0915 06:29:22.935284   12603 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19644-5979/.minikube/config/config.json: open /home/jenkins/minikube-integration/19644-5979/.minikube/config/config.json: no such file or directory
	I0915 06:29:22.935855   12603 out.go:352] Setting JSON to true
	I0915 06:29:22.936742   12603 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":714,"bootTime":1726381049,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:29:22.936841   12603 start.go:139] virtualization: kvm guest
	I0915 06:29:22.939245   12603 out.go:97] [download-only-319436] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0915 06:29:22.939337   12603 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19644-5979/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 06:29:22.939369   12603 notify.go:220] Checking for updates...
	I0915 06:29:22.940862   12603 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:29:22.942371   12603 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:22.943752   12603 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:29:22.944939   12603 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:29:22.946253   12603 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0915 06:29:22.948490   12603 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:29:22.948674   12603 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:22.970064   12603 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:29:22.970160   12603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:23.329959   12603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 06:29:23.320563931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:23.330065   12603 docker.go:318] overlay module found
	I0915 06:29:23.331799   12603 out.go:97] Using the docker driver based on user configuration
	I0915 06:29:23.331824   12603 start.go:297] selected driver: docker
	I0915 06:29:23.331835   12603 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:23.331931   12603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:23.376576   12603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 06:29:23.368462822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:23.376738   12603 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:23.377235   12603 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0915 06:29:23.377385   12603 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:29:23.379102   12603 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-319436 host does not exist
	  To start a cluster, run: "minikube start -p download-only-319436"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-319436
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (3.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-993247 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-993247 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.792308038s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.79s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-993247
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-993247: exit status 85 (56.637722ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-319436 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-319436        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-319436        | download-only-319436 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | -o=json --download-only        | download-only-993247 | jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-993247        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:28.182137   12963 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:29:28.182398   12963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:28.182411   12963 out.go:358] Setting ErrFile to fd 2...
	I0915 06:29:28.182419   12963 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:28.182591   12963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:29:28.183163   12963 out.go:352] Setting JSON to true
	I0915 06:29:28.183945   12963 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":719,"bootTime":1726381049,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:29:28.184032   12963 start.go:139] virtualization: kvm guest
	I0915 06:29:28.186305   12963 out.go:97] [download-only-993247] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:29:28.186462   12963 notify.go:220] Checking for updates...
	I0915 06:29:28.187845   12963 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:29:28.189395   12963 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:29:28.190759   12963 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:29:28.192221   12963 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:29:28.193825   12963 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0915 06:29:28.196678   12963 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:29:28.196897   12963 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:28.218301   12963 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:29:28.218404   12963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:28.264144   12963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-15 06:29:28.255126871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:28.264267   12963 docker.go:318] overlay module found
	I0915 06:29:28.265954   12963 out.go:97] Using the docker driver based on user configuration
	I0915 06:29:28.265981   12963 start.go:297] selected driver: docker
	I0915 06:29:28.265986   12963 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:28.266052   12963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:28.310869   12963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-15 06:29:28.302102134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:29:28.311015   12963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:28.311477   12963 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0915 06:29:28.311660   12963 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:29:28.313574   12963 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-993247 host does not exist
	  To start a cluster, run: "minikube start -p download-only-993247"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-993247
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (1.05s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-583228 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-583228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-583228
--- PASS: TestDownloadOnlyKic (1.05s)

                                                
                                    
TestBinaryMirror (0.73s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-350163 --alsologtostderr --binary-mirror http://127.0.0.1:33455 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-350163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-350163
--- PASS: TestBinaryMirror (0.73s)

                                                
                                    
TestOffline (54.89s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-556571 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-556571 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (52.510243593s)
helpers_test.go:175: Cleaning up "offline-crio-556571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-556571
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-556571: (2.37547383s)
--- PASS: TestOffline (54.89s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-022322
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-022322: exit status 85 (54.255156ms)

                                                
                                                
-- stdout --
	* Profile "addons-022322" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-022322"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-022322
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-022322: exit status 85 (56.333757ms)

-- stdout --
	* Profile "addons-022322" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-022322"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (179.46s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-022322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-022322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m59.455270522s)
--- PASS: TestAddons/Setup (179.46s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-022322 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-022322 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.64s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gbvq2" [1698ee6d-e088-4254-ba8f-689d760f5a03] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003759089s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-022322
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-022322: (5.630476954s)
--- PASS: TestAddons/parallel/InspektorGadget (10.64s)

TestAddons/parallel/HelmTiller (11.69s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.048343ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-tpczq" [e9d5480f-8c59-4ab5-b5fc-a6fcd1801c51] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004619529s
addons_test.go:475: (dbg) Run:  kubectl --context addons-022322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-022322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.166541415s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.69s)

TestAddons/parallel/CSI (55.57s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.925741ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-022322 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-022322 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [93e4b375-117c-4dfc-8f9b-d1a147a4b96d] Pending
helpers_test.go:344: "task-pv-pod" [93e4b375-117c-4dfc-8f9b-d1a147a4b96d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [93e4b375-117c-4dfc-8f9b-d1a147a4b96d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004594364s
addons_test.go:590: (dbg) Run:  kubectl --context addons-022322 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-022322 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-022322 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-022322 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-022322 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-022322 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-022322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-022322 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [10c14360-3cdb-432a-ae7c-810a48a221c6] Pending
helpers_test.go:344: "task-pv-pod-restore" [10c14360-3cdb-432a-ae7c-810a48a221c6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [10c14360-3cdb-432a-ae7c-810a48a221c6] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003667358s
addons_test.go:632: (dbg) Run:  kubectl --context addons-022322 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-022322 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-022322 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-022322 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.580770728s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.57s)

TestAddons/parallel/Headlamp (58.3s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-022322 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-5bgtr" [4dccb0dd-ecfc-4d1f-9257-347397ec92c7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-5bgtr" [4dccb0dd-ecfc-4d1f-9257-347397ec92c7] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 52.004083052s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-022322 addons disable headlamp --alsologtostderr -v=1: (5.57775086s)
--- PASS: TestAddons/parallel/Headlamp (58.30s)

TestAddons/parallel/CloudSpanner (5.46s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-xprfl" [202101a0-07c5-45cd-8c10-860a8c3655d3] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003769444s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-022322
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

TestAddons/parallel/NvidiaDevicePlugin (5.43s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7x4t6" [549d014b-a13d-466e-8959-d22764717045] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003078236s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-022322
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.43s)

TestAddons/parallel/Yakd (11.63s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xrfvn" [4ffc8ebf-f530-4c2d-8279-9b82d1b3c170] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003442137s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-022322 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-022322 addons disable yakd --alsologtostderr -v=1: (5.624937287s)
--- PASS: TestAddons/parallel/Yakd (11.63s)

TestAddons/StoppedEnableDisable (12.05s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-022322
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-022322: (11.817514409s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-022322
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-022322
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-022322
--- PASS: TestAddons/StoppedEnableDisable (12.05s)

TestCertOptions (25.36s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-958421 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-958421 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.793648414s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-958421 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-958421 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-958421 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-958421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-958421
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-958421: (3.922014525s)
--- PASS: TestCertOptions (25.36s)

TestCertExpiration (217.92s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-370110 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-370110 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.822829993s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-370110 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-370110 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (13.877425912s)
helpers_test.go:175: Cleaning up "cert-expiration-370110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-370110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-370110: (2.216396867s)
--- PASS: TestCertExpiration (217.92s)

TestForceSystemdFlag (26.82s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-058781 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-058781 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.179126703s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-058781 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-058781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-058781
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-058781: (2.368480785s)
--- PASS: TestForceSystemdFlag (26.82s)

TestForceSystemdEnv (29.65s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-107421 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-107421 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.337913408s)
helpers_test.go:175: Cleaning up "force-systemd-env-107421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-107421
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-107421: (2.307991835s)
--- PASS: TestForceSystemdEnv (29.65s)

TestKVMDriverInstallOrUpdate (3.24s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.24s)

TestErrorSpam/setup (20.42s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-073846 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-073846 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-073846 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-073846 --driver=docker  --container-runtime=crio: (20.424002626s)
--- PASS: TestErrorSpam/setup (20.42s)

TestErrorSpam/start (0.55s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.83s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.47s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.63s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 unpause
--- PASS: TestErrorSpam/unpause (1.63s)

TestErrorSpam/stop (1.33s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 stop: (1.164297677s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-073846 --log_dir /tmp/nospam-073846 stop
--- PASS: TestErrorSpam/stop (1.33s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19644-5979/.minikube/files/etc/test/nested/copy/12591/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (67.32s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988233 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0915 06:47:34.135256   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:34.142043   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:34.153463   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:34.174971   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:34.216386   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:34.297861   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:34.459447   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:34.781178   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:35.423217   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:36.704822   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:39.267310   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:44.389138   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:47:54.631183   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:48:15.113177   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-988233 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m7.314528606s)
--- PASS: TestFunctional/serial/StartWithProxy (67.32s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.99s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988233 --alsologtostderr -v=8
E0915 06:48:56.074581   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-988233 --alsologtostderr -v=8: (27.985759091s)
functional_test.go:663: soft start took 27.986491018s for "functional-988233" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.99s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-988233 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.97s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-988233 cache add registry.k8s.io/pause:3.3: (1.008308316s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-988233 cache add registry.k8s.io/pause:latest: (1.026685506s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.97s)

TestFunctional/serial/CacheCmd/cache/add_local (1.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-988233 /tmp/TestFunctionalserialCacheCmdcacheadd_local1669114421/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cache add minikube-local-cache-test:functional-988233
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cache delete minikube-local-cache-test:functional-988233
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-988233
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (262.212141ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 kubectl -- --context functional-988233 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-988233 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (36.81s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988233 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-988233 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.808016651s)
functional_test.go:761: restart took 36.808160081s for "functional-988233" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.81s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-988233 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.28s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-988233 logs: (1.281022079s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

TestFunctional/serial/LogsFileCmd (1.31s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 logs --file /tmp/TestFunctionalserialLogsFileCmd3638241448/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-988233 logs --file /tmp/TestFunctionalserialLogsFileCmd3638241448/001/logs.txt: (1.311162675s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

TestFunctional/serial/InvalidService (3.85s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-988233 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-988233
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-988233: exit status 115 (313.302529ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31848 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-988233 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.85s)

TestFunctional/parallel/ConfigCmd (0.34s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 config get cpus: exit status 14 (62.255744ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 config get cpus: exit status 14 (62.083125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

TestFunctional/parallel/DashboardCmd (32.82s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-988233 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-988233 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 57549: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.82s)

TestFunctional/parallel/DryRun (0.33s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-988233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (139.554894ms)

-- stdout --
	* [functional-988233] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0915 06:51:12.520744   57166 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:12.520859   57166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:12.520869   57166 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:12.520874   57166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:12.521090   57166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:51:12.521634   57166 out.go:352] Setting JSON to false
	I0915 06:51:12.522842   57166 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2024,"bootTime":1726381049,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:51:12.522956   57166 start.go:139] virtualization: kvm guest
	I0915 06:51:12.525003   57166 out.go:177] * [functional-988233] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 06:51:12.526505   57166 notify.go:220] Checking for updates...
	I0915 06:51:12.526573   57166 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:51:12.527990   57166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:12.529585   57166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:51:12.531019   57166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:51:12.532396   57166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:12.533724   57166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:12.535412   57166 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:12.535924   57166 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:12.559079   57166 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:51:12.559229   57166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:51:12.608371   57166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:51:12.599299746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:51:12.608483   57166 docker.go:318] overlay module found
	I0915 06:51:12.610403   57166 out.go:177] * Using the docker driver based on existing profile
	I0915 06:51:12.611663   57166 start.go:297] selected driver: docker
	I0915 06:51:12.611678   57166 start.go:901] validating driver "docker" against &{Name:functional-988233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-988233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:12.611785   57166 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:12.613840   57166 out.go:201] 
	W0915 06:51:12.615122   57166 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 06:51:12.616748   57166 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988233 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
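A note on the dry-run stderr above: the invocation with --memory 250MB is expected to fail validation, which is why the output ends in RSRC_INSUFFICIENT_REQ_MEMORY while the test itself passes. A minimal Go sketch of that kind of memory-floor check, with the threshold and message taken from the log (illustrative names, not minikube's actual implementation):

	package main

	import "fmt"

	// minUsableMemoryMB is the floor quoted in the log above.
	const minUsableMemoryMB = 1800

	// validateRequestedMemory mirrors the shape of the check, not minikube's code.
	func validateRequestedMemory(reqMB int) error {
		if reqMB < minUsableMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateRequestedMemory(250); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}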

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-988233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-988233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (152.268509ms)

-- stdout --
	* [functional-988233] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0915 06:51:12.378051   57089 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:51:12.378159   57089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:12.378169   57089 out.go:358] Setting ErrFile to fd 2...
	I0915 06:51:12.378173   57089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:51:12.378422   57089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 06:51:12.378945   57089 out.go:352] Setting JSON to false
	I0915 06:51:12.379875   57089 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2023,"bootTime":1726381049,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 06:51:12.379959   57089 start.go:139] virtualization: kvm guest
	I0915 06:51:12.382403   57089 out.go:177] * [functional-988233] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0915 06:51:12.383903   57089 notify.go:220] Checking for updates...
	I0915 06:51:12.383910   57089 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:51:12.385991   57089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:51:12.387502   57089 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 06:51:12.388837   57089 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 06:51:12.390145   57089 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 06:51:12.391290   57089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:51:12.392994   57089 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 06:51:12.393434   57089 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:51:12.419977   57089 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:51:12.420065   57089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:51:12.468632   57089 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:51:12.458130877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 06:51:12.468738   57089 docker.go:318] overlay module found
	I0915 06:51:12.470574   57089 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 06:51:12.471833   57089 start.go:297] selected driver: docker
	I0915 06:51:12.471847   57089 start.go:901] validating driver "docker" against &{Name:functional-988233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-988233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:51:12.471927   57089 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:51:12.473972   57089 out.go:201] 
	W0915 06:51:12.475244   57089 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 06:51:12.476438   57089 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
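The French banner above, "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo", is the localized form of "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB", and that localized output is what this test asserts. A hedged Go sketch of checking localized CLI output, assuming LC_ALL is how the locale gets flipped:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same dry-run invocation as the test, with the locale forced to French.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-988233",
			"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
		cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumed locale switch
		out, _ := cmd.CombinedOutput()              // a non-zero exit is expected here
		if !strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
			fmt.Println("expected the French error banner, got:", string(out))
		}
	}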

TestFunctional/parallel/StatusCmd (0.86s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)
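The -f argument in the second run above is a Go text/template rendered against the status structure; note that "kublet" is a literal label in the format string, copied verbatim from the test, typo included. A self-contained sketch of how such a template renders (the field names are assumed to match the template keys):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the structure the real command renders.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
	}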

TestFunctional/parallel/ServiceCmdConnect (70.47s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-988233 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-988233 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-89pwf" [55b3e4bc-0ecd-4704-87f2-bb30d2e18ec5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-89pwf" [55b3e4bc-0ecd-4704-87f2-bb30d2e18ec5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 1m10.003659184s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30163
functional_test.go:1675: http://192.168.49.2:30163: success! body:

Hostname: hello-node-connect-67bdd5bbb4-89pwf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30163
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (70.47s)
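The connectivity check above reduces to waiting for the NodePort endpoint to answer. A hedged sketch of that polling loop, using the URL from this run (http://192.168.49.2:30163), which would differ on another cluster:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForHTTP polls url until it returns 200 OK or the timeout elapses.
	func waitForHTTP(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("%s not reachable within %s", url, timeout)
	}

	func main() {
		if err := waitForHTTP("http://192.168.49.2:30163", time.Minute); err != nil {
			fmt.Println(err)
		}
	}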

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.72s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh -n functional-988233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cp functional-988233:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2326768014/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh -n functional-988233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh -n functional-988233 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.72s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/12591/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo cat /etc/test/nested/copy/12591/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.73s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/12591.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo cat /etc/ssl/certs/12591.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/12591.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo cat /usr/share/ca-certificates/12591.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/125912.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo cat /etc/ssl/certs/125912.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/125912.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo cat /usr/share/ca-certificates/125912.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)
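The hashed names checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for certificates under /etc/ssl/certs. A hedged sketch of deriving such a name, assuming openssl is on PATH; pairing 12591.pem with the 51391683 hash is an illustration, not something the log confirms:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// -hash prints the subject hash used for the /etc/ssl/certs/<hash>.0 name.
		out, err := exec.Command("openssl", "x509", "-noout", "-hash",
			"-in", "/usr/share/ca-certificates/12591.pem").Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		fmt.Printf("expected link name: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
	}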

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-988233 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 ssh "sudo systemctl is-active docker": exit status 1 (243.099838ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 ssh "sudo systemctl is-active containerd": exit status 1 (240.016972ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
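The "exit status 3" relayed through ssh above is systemctl's conventional code for an inactive unit, which is the desired outcome here: with crio active, docker and containerd must report inactive. A small sketch of recovering that code from a wrapped command:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "docker").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// 3 conventionally means the unit exists but is inactive.
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}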

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-988233 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-988233 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-988233 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 53640: os: process already finished
helpers_test.go:502: unable to terminate pid 53225: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-988233 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-988233 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/DeployApp (70.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-988233 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-988233 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-lsj6m" [53b13a7f-231d-45f7-a71a-9d87a1d138ef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-lsj6m" [53b13a7f-231d-45f7-a71a-9d87a1d138ef] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m10.003804966s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (70.14s)

TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 service list -o json
functional_test.go:1494: Took "477.426687ms" to run "out/minikube-linux-amd64 -p functional-988233 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "283.683066ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.195942ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "304.830562ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.90546ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
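The Took "..." figures in these ProfileCmd tests come from timing each CLI invocation. A trivial sketch of that measurement (the command itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		_ = exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Run()
		fmt.Printf("Took %q to run the command\n", time.Since(start).String())
	}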

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32499
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/MountCmd/any-port (26.61s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdany-port110036916/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726383070754545503" to /tmp/TestFunctionalparallelMountCmdany-port110036916/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726383070754545503" to /tmp/TestFunctionalparallelMountCmdany-port110036916/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726383070754545503" to /tmp/TestFunctionalparallelMountCmdany-port110036916/001/test-1726383070754545503
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.683219ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 15 06:51 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 15 06:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 15 06:51 test-1726383070754545503
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh cat /mount-9p/test-1726383070754545503
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-988233 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9ba38985-c910-4b42-9164-f0a898a058fb] Pending
helpers_test.go:344: "busybox-mount" [9ba38985-c910-4b42-9164-f0a898a058fb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9ba38985-c910-4b42-9164-f0a898a058fb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9ba38985-c910-4b42-9164-f0a898a058fb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.003780453s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-988233 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdany-port110036916/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (26.61s)
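The first findmnt probe above fails because the 9p mount has not landed yet; the helper simply retries until it shows up. A hedged sketch of that probe-and-retry pattern:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForMount polls findmnt until the target appears as a mountpoint.
	func waitForMount(target string, attempts int) error {
		for i := 0; i < attempts; i++ {
			if exec.Command("findmnt", "-T", target).Run() == nil {
				return nil // the mount is visible
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s never appeared", target)
	}

	func main() {
		if err := waitForMount("/mount-9p", 10); err != nil {
			fmt.Println(err)
		}
	}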

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32499
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdspecific-port2550908310/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.042546ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdspecific-port2550908310/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 ssh "sudo umount -f /mount-9p": exit status 1 (247.161756ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-988233 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdspecific-port2550908310/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3695559384/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3695559384/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3695559384/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T" /mount1: exit status 1 (303.345131ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-988233 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3695559384/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3695559384/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-988233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3695559384/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
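The "unable to find parent, assuming dead" lines come from cleanup probing whether the mount processes still exist. One conventional way to make that liveness check, sketched under the assumption that signal 0 is used (it performs error checking without delivering a signal):

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	// alive reports whether pid still exists; an error such as ESRCH
	// suggests the process is gone (EPERM would mean alive but foreign).
	func alive(pid int) bool {
		return syscall.Kill(pid, 0) == nil
	}

	func main() {
		fmt.Println("self alive:", alive(os.Getpid()))
	}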

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988233 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-988233
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988233 image ls --format short --alsologtostderr:
I0915 06:51:45.731169   60326 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:45.731417   60326 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:45.731427   60326 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:45.731432   60326 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:45.731616   60326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
I0915 06:51:45.732221   60326 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:45.732341   60326 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:45.732737   60326 cli_runner.go:164] Run: docker container inspect functional-988233 --format={{.State.Status}}
I0915 06:51:45.750650   60326 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:45.750692   60326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988233
I0915 06:51:45.767220   60326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/functional-988233/id_rsa Username:docker}
I0915 06:51:45.856479   60326 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988233 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| localhost/minikube-local-cache-test     | functional-988233  | 546aba23b07fa | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988233 image ls --format table --alsologtostderr:
I0915 06:51:46.339455   60481 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:46.339709   60481 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:46.339720   60481 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:46.339724   60481 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:46.339917   60481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
I0915 06:51:46.340543   60481 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:46.340645   60481 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:46.341007   60481 cli_runner.go:164] Run: docker container inspect functional-988233 --format={{.State.Status}}
I0915 06:51:46.360227   60481 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:46.360277   60481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988233
I0915 06:51:46.377196   60481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/functional-988233/id_rsa Username:docker}
I0915 06:51:46.472550   60481 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988233 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"546aba23b07fa389bfa7ff7bf2a19b23fe4d34c7841120ef1b34716ee2e9cabc","repoDigests":["localhost/minikube-local-cache-test@sha256:d33b01713e6764b78a013ccc97bcc5dff9720563dde3850a9bedd2bc9bf1080a"],"repoTags":["localhost/minikube-local-cache-test:functional-988233"],"size":"3330"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988233 image ls --format json --alsologtostderr:
I0915 06:51:46.141697   60431 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:46.141813   60431 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:46.141822   60431 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:46.141826   60431 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:46.141995   60431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
I0915 06:51:46.142537   60431 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:46.142627   60431 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:46.142996   60431 cli_runner.go:164] Run: docker container inspect functional-988233 --format={{.State.Status}}
I0915 06:51:46.160349   60431 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:46.160402   60431 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988233
I0915 06:51:46.176594   60431 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/functional-988233/id_rsa Username:docker}
I0915 06:51:46.264166   60431 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
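Note: the stderr above shows how the listing is produced on the crio runtime: minikube shells into the node and wraps crictl. A minimal sketch for reproducing the JSON by hand (profile name and binary path taken from this run):

	out/minikube-linux-amd64 -p functional-988233 image ls --format json --alsologtostderr
	# equivalent to running, on the node itself (per the last stderr line above):
	sudo crictl images --output json
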
TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988233 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 546aba23b07fa389bfa7ff7bf2a19b23fe4d34c7841120ef1b34716ee2e9cabc
repoDigests:
- localhost/minikube-local-cache-test@sha256:d33b01713e6764b78a013ccc97bcc5dff9720563dde3850a9bedd2bc9bf1080a
repoTags:
- localhost/minikube-local-cache-test:functional-988233
size: "3330"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988233 image ls --format yaml --alsologtostderr:
I0915 06:51:45.935281   60378 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:45.935389   60378 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:45.935399   60378 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:45.935403   60378 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:45.935616   60378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
I0915 06:51:45.936173   60378 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:45.936296   60378 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:45.936706   60378 cli_runner.go:164] Run: docker container inspect functional-988233 --format={{.State.Status}}
I0915 06:51:45.954037   60378 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:45.954091   60378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988233
I0915 06:51:45.969835   60378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/functional-988233/id_rsa Username:docker}
I0915 06:51:46.060450   60378 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)
TestFunctional/parallel/ImageCommands/ImageBuild (1.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-988233 ssh pgrep buildkitd: exit status 1 (233.727367ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image build -t localhost/my-image:functional-988233 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-988233 image build -t localhost/my-image:functional-988233 testdata/build --alsologtostderr: (1.38464756s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-988233 image build -t localhost/my-image:functional-988233 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8f1b29ee4a5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-988233
--> 275bb754d7d
Successfully tagged localhost/my-image:functional-988233
275bb754d7d22d2f6e27b4c9e69d9839e9f57354c08bb98e25db1481825b1ad1
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-988233 image build -t localhost/my-image:functional-988233 testdata/build --alsologtostderr:
I0915 06:51:46.786321   60629 out.go:345] Setting OutFile to fd 1 ...
I0915 06:51:46.786491   60629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:46.786502   60629 out.go:358] Setting ErrFile to fd 2...
I0915 06:51:46.786508   60629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:51:46.786693   60629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
I0915 06:51:46.787318   60629 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:46.787893   60629 config.go:182] Loaded profile config "functional-988233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0915 06:51:46.788304   60629 cli_runner.go:164] Run: docker container inspect functional-988233 --format={{.State.Status}}
I0915 06:51:46.805732   60629 ssh_runner.go:195] Run: systemctl --version
I0915 06:51:46.805781   60629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988233
I0915 06:51:46.823256   60629 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/functional-988233/id_rsa Username:docker}
I0915 06:51:46.916334   60629 build_images.go:161] Building image from path: /tmp/build.3213463486.tar
I0915 06:51:46.916418   60629 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 06:51:46.924345   60629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3213463486.tar
I0915 06:51:46.927333   60629 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3213463486.tar: stat -c "%s %y" /var/lib/minikube/build/build.3213463486.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3213463486.tar': No such file or directory
I0915 06:51:46.927358   60629 ssh_runner.go:362] scp /tmp/build.3213463486.tar --> /var/lib/minikube/build/build.3213463486.tar (3072 bytes)
I0915 06:51:46.948934   60629 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3213463486
I0915 06:51:46.956746   60629 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3213463486 -xf /var/lib/minikube/build/build.3213463486.tar
I0915 06:51:46.964472   60629 crio.go:315] Building image: /var/lib/minikube/build/build.3213463486
I0915 06:51:46.964549   60629 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-988233 /var/lib/minikube/build/build.3213463486 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0915 06:51:48.107078   60629 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-988233 /var/lib/minikube/build/build.3213463486 --cgroup-manager=cgroupfs: (1.142503121s)
I0915 06:51:48.107142   60629 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3213463486
I0915 06:51:48.115451   60629 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3213463486.tar
I0915 06:51:48.123509   60629 build_images.go:217] Built localhost/my-image:functional-988233 from /tmp/build.3213463486.tar
I0915 06:51:48.123540   60629 build_images.go:133] succeeded building to: functional-988233
I0915 06:51:48.123547   60629 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.83s)
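Note: the stderr above spells out the full build path on the crio runtime: the local build context is tarred, copied to the node, unpacked, and then built. A minimal by-hand sketch of the node-side steps (the build.3213463486 temp name is specific to this run and will differ):

	sudo mkdir -p /var/lib/minikube/build/build.3213463486
	sudo tar -C /var/lib/minikube/build/build.3213463486 -xf /var/lib/minikube/build/build.3213463486.tar
	# CRI-O itself has no image-build support, so minikube delegates to podman here:
	sudo podman build -t localhost/my-image:functional-988233 /var/lib/minikube/build/build.3213463486 --cgroup-manager=cgroupfs
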
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image rm kicbase/echo-server:functional-988233 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)
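Note: as with the other image commands, the mutation is checked by simply re-listing afterwards. A minimal sketch using the exact commands from this run:

	out/minikube-linux-amd64 -p functional-988233 image rm kicbase/echo-server:functional-988233 --alsologtostderr
	out/minikube-linux-amd64 -p functional-988233 image ls    # the removed tag should no longer be listed
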
TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
TestFunctional/parallel/Version/components (0.44s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 update-context --alsologtostderr -v=2
E0915 06:52:34.134402   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:01.838837   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-988233 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-988233 tunnel --alsologtostderr] ...
E0915 06:57:34.135362   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-988233
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-988233
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-988233
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (149.78s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-222693 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0915 07:02:34.135464   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:03:57.201010   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-222693 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m29.122146138s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (149.78s)
TestMultiControlPlane/serial/DeployApp (3.93s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-222693 -- rollout status deployment/busybox: (2.161768499s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-d8hb8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-gql8m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-jd7nh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-d8hb8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-gql8m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-jd7nh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-d8hb8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-gql8m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-jd7nh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.93s)
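Note: the deploy check above is a fixed three-step pattern: apply the busybox manifest, wait for rollout, then resolve cluster names from inside each pod. A minimal sketch against a single pod (the pod name below is from this run and will differ elsewhere):

	out/minikube-linux-amd64 kubectl -p ha-222693 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-amd64 kubectl -p ha-222693 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-d8hb8 -- nslookup kubernetes.default.svc.cluster.local
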
TestMultiControlPlane/serial/PingHostFromPods (0.98s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-d8hb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-d8hb8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-gql8m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-gql8m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-jd7nh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-jd7nh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.98s)
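Note: host reachability is checked by resolving host.minikube.internal inside the pod and pinging the gateway it maps to (192.168.49.1 on this run's docker network). The awk 'NR==5' / cut pipeline is exactly what the test runs; it just extracts the address field from the fifth line of busybox nslookup output:

	out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-d8hb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p ha-222693 -- exec busybox-7dff88458-d8hb8 -- sh -c "ping -c 1 192.168.49.1"
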
TestMultiControlPlane/serial/AddWorkerNode (30.03s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-222693 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-222693 -v=7 --alsologtostderr: (29.222416998s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.03s)
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-222693 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)
TestMultiControlPlane/serial/CopyFile (15.27s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp testdata/cp-test.txt ha-222693:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4048286112/001/cp-test_ha-222693.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693:/home/docker/cp-test.txt ha-222693-m02:/home/docker/cp-test_ha-222693_ha-222693-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m02 "sudo cat /home/docker/cp-test_ha-222693_ha-222693-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693:/home/docker/cp-test.txt ha-222693-m03:/home/docker/cp-test_ha-222693_ha-222693-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test.txt"
E0915 07:04:58.307758   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:04:58.314110   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:04:58.325457   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:04:58.346820   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m03 "sudo cat /home/docker/cp-test_ha-222693_ha-222693-m03.txt"
E0915 07:04:58.388786   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:04:58.470238   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693:/home/docker/cp-test.txt ha-222693-m04:/home/docker/cp-test_ha-222693_ha-222693-m04.txt
E0915 07:04:58.631522   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:04:58.953204   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m04 "sudo cat /home/docker/cp-test_ha-222693_ha-222693-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp testdata/cp-test.txt ha-222693-m02:/home/docker/cp-test.txt
E0915 07:04:59.595135   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4048286112/001/cp-test_ha-222693-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m02:/home/docker/cp-test.txt ha-222693:/home/docker/cp-test_ha-222693-m02_ha-222693.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m02 "sudo cat /home/docker/cp-test.txt"
E0915 07:05:00.876632   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test_ha-222693-m02_ha-222693.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m02:/home/docker/cp-test.txt ha-222693-m03:/home/docker/cp-test_ha-222693-m02_ha-222693-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m03 "sudo cat /home/docker/cp-test_ha-222693-m02_ha-222693-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m02:/home/docker/cp-test.txt ha-222693-m04:/home/docker/cp-test_ha-222693-m02_ha-222693-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m04 "sudo cat /home/docker/cp-test_ha-222693-m02_ha-222693-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp testdata/cp-test.txt ha-222693-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m03 "sudo cat /home/docker/cp-test.txt"
E0915 07:05:03.438563   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4048286112/001/cp-test_ha-222693-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m03:/home/docker/cp-test.txt ha-222693:/home/docker/cp-test_ha-222693-m03_ha-222693.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test_ha-222693-m03_ha-222693.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m03:/home/docker/cp-test.txt ha-222693-m02:/home/docker/cp-test_ha-222693-m03_ha-222693-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m02 "sudo cat /home/docker/cp-test_ha-222693-m03_ha-222693-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m03:/home/docker/cp-test.txt ha-222693-m04:/home/docker/cp-test_ha-222693-m03_ha-222693-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m04 "sudo cat /home/docker/cp-test_ha-222693-m03_ha-222693-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp testdata/cp-test.txt ha-222693-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4048286112/001/cp-test_ha-222693-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m04:/home/docker/cp-test.txt ha-222693:/home/docker/cp-test_ha-222693-m04_ha-222693.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test_ha-222693-m04_ha-222693.txt"
E0915 07:05:08.560445   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m04:/home/docker/cp-test.txt ha-222693-m02:/home/docker/cp-test_ha-222693-m04_ha-222693-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m02 "sudo cat /home/docker/cp-test_ha-222693-m04_ha-222693-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 cp ha-222693-m04:/home/docker/cp-test.txt ha-222693-m03:/home/docker/cp-test_ha-222693-m04_ha-222693-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693-m03 "sudo cat /home/docker/cp-test_ha-222693-m04_ha-222693-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.27s)
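Note: each of the copies above is verified with the same round trip: cp the file, then cat it back over ssh and compare. A minimal sketch for one host-to-node pair (profile name and paths from this run):

	out/minikube-linux-amd64 -p ha-222693 cp testdata/cp-test.txt ha-222693:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-222693 ssh -n ha-222693 "sudo cat /home/docker/cp-test.txt"   # should match the local testdata file
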
TestMultiControlPlane/serial/StopSecondaryNode (12.44s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 node stop m02 -v=7 --alsologtostderr
E0915 07:05:18.802664   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-222693 node stop m02 -v=7 --alsologtostderr: (11.788139867s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr: exit status 7 (651.720175ms)
-- stdout --
	ha-222693
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-222693-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-222693-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-222693-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0915 07:05:22.195272   86264 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:05:22.195401   86264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:05:22.195414   86264 out.go:358] Setting ErrFile to fd 2...
	I0915 07:05:22.195421   86264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:05:22.195626   86264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 07:05:22.195870   86264 out.go:352] Setting JSON to false
	I0915 07:05:22.195899   86264 mustload.go:65] Loading cluster: ha-222693
	I0915 07:05:22.195947   86264 notify.go:220] Checking for updates...
	I0915 07:05:22.196506   86264 config.go:182] Loaded profile config "ha-222693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:05:22.196527   86264 status.go:255] checking status of ha-222693 ...
	I0915 07:05:22.196958   86264 cli_runner.go:164] Run: docker container inspect ha-222693 --format={{.State.Status}}
	I0915 07:05:22.215237   86264 status.go:330] ha-222693 host status = "Running" (err=<nil>)
	I0915 07:05:22.215258   86264 host.go:66] Checking if "ha-222693" exists ...
	I0915 07:05:22.215494   86264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-222693
	I0915 07:05:22.231775   86264 host.go:66] Checking if "ha-222693" exists ...
	I0915 07:05:22.232103   86264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:05:22.232148   86264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-222693
	I0915 07:05:22.249253   86264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/ha-222693/id_rsa Username:docker}
	I0915 07:05:22.345334   86264 ssh_runner.go:195] Run: systemctl --version
	I0915 07:05:22.349302   86264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:05:22.359478   86264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:05:22.409638   86264 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-15 07:05:22.40047303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 07:05:22.410257   86264 kubeconfig.go:125] found "ha-222693" server: "https://192.168.49.254:8443"
	I0915 07:05:22.410290   86264 api_server.go:166] Checking apiserver status ...
	I0915 07:05:22.410328   86264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:05:22.421250   86264 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1464/cgroup
	I0915 07:05:22.430224   86264 api_server.go:182] apiserver freezer: "8:freezer:/docker/c0a88f5176b3838801930fc77f8bb72e64fffdf4838d556de210dec63cc08c37/crio/crio-e07d727a43e14c820ecfc60069fc4bf60f011f34e1b8dee54bdcf972d8cf090e"
	I0915 07:05:22.430290   86264 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c0a88f5176b3838801930fc77f8bb72e64fffdf4838d556de210dec63cc08c37/crio/crio-e07d727a43e14c820ecfc60069fc4bf60f011f34e1b8dee54bdcf972d8cf090e/freezer.state
	I0915 07:05:22.437768   86264 api_server.go:204] freezer state: "THAWED"
	I0915 07:05:22.437792   86264 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 07:05:22.442643   86264 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 07:05:22.442664   86264 status.go:422] ha-222693 apiserver status = Running (err=<nil>)
	I0915 07:05:22.442673   86264 status.go:257] ha-222693 status: &{Name:ha-222693 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:05:22.442687   86264 status.go:255] checking status of ha-222693-m02 ...
	I0915 07:05:22.442913   86264 cli_runner.go:164] Run: docker container inspect ha-222693-m02 --format={{.State.Status}}
	I0915 07:05:22.461686   86264 status.go:330] ha-222693-m02 host status = "Stopped" (err=<nil>)
	I0915 07:05:22.461704   86264 status.go:343] host is not running, skipping remaining checks
	I0915 07:05:22.461710   86264 status.go:257] ha-222693-m02 status: &{Name:ha-222693-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:05:22.461735   86264 status.go:255] checking status of ha-222693-m03 ...
	I0915 07:05:22.462004   86264 cli_runner.go:164] Run: docker container inspect ha-222693-m03 --format={{.State.Status}}
	I0915 07:05:22.478579   86264 status.go:330] ha-222693-m03 host status = "Running" (err=<nil>)
	I0915 07:05:22.478602   86264 host.go:66] Checking if "ha-222693-m03" exists ...
	I0915 07:05:22.478832   86264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-222693-m03
	I0915 07:05:22.496045   86264 host.go:66] Checking if "ha-222693-m03" exists ...
	I0915 07:05:22.496380   86264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:05:22.496427   86264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-222693-m03
	I0915 07:05:22.514750   86264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/ha-222693-m03/id_rsa Username:docker}
	I0915 07:05:22.605079   86264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:05:22.616010   86264 kubeconfig.go:125] found "ha-222693" server: "https://192.168.49.254:8443"
	I0915 07:05:22.616037   86264 api_server.go:166] Checking apiserver status ...
	I0915 07:05:22.616084   86264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:05:22.625806   86264 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	I0915 07:05:22.634165   86264 api_server.go:182] apiserver freezer: "8:freezer:/docker/27a32f5792c084572103069d17bdaf25be371c0197307cfa6a593d81e61fa79b/crio/crio-eeac253eb845cfd33a56ce370bb4193e3cc9de288d824c6aa5cc8177d5dec4c5"
	I0915 07:05:22.634230   86264 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27a32f5792c084572103069d17bdaf25be371c0197307cfa6a593d81e61fa79b/crio/crio-eeac253eb845cfd33a56ce370bb4193e3cc9de288d824c6aa5cc8177d5dec4c5/freezer.state
	I0915 07:05:22.641904   86264 api_server.go:204] freezer state: "THAWED"
	I0915 07:05:22.641934   86264 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 07:05:22.645601   86264 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 07:05:22.645623   86264 status.go:422] ha-222693-m03 apiserver status = Running (err=<nil>)
	I0915 07:05:22.645631   86264 status.go:257] ha-222693-m03 status: &{Name:ha-222693-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:05:22.645645   86264 status.go:255] checking status of ha-222693-m04 ...
	I0915 07:05:22.645875   86264 cli_runner.go:164] Run: docker container inspect ha-222693-m04 --format={{.State.Status}}
	I0915 07:05:22.663564   86264 status.go:330] ha-222693-m04 host status = "Running" (err=<nil>)
	I0915 07:05:22.663585   86264 host.go:66] Checking if "ha-222693-m04" exists ...
	I0915 07:05:22.663814   86264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-222693-m04
	I0915 07:05:22.681539   86264 host.go:66] Checking if "ha-222693-m04" exists ...
	I0915 07:05:22.681808   86264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:05:22.681856   86264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-222693-m04
	I0915 07:05:22.699370   86264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/ha-222693-m04/id_rsa Username:docker}
	I0915 07:05:22.793095   86264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:05:22.803291   86264 status.go:257] ha-222693-m04 status: &{Name:ha-222693-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.44s)
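Note: the exit status 7 above is the expected signal here: as the run shows, status exits non-zero once a node in the profile is stopped, which makes the degraded state easy to script against. A minimal shell sketch (the message text is illustrative):

	out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr || echo "cluster degraded (status exited $?)"
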
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)
TestMultiControlPlane/serial/RestartSecondaryNode (20.31s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 node start m02 -v=7 --alsologtostderr
E0915 07:05:39.284823   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-222693 node start m02 -v=7 --alsologtostderr: (19.435011532s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.31s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (15.54s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.537057723s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (15.54s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (186.88s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-222693 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-222693 -v=7 --alsologtostderr
E0915 07:06:20.247078   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-222693 -v=7 --alsologtostderr: (36.561561558s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-222693 --wait=true -v=7 --alsologtostderr
E0915 07:07:34.135092   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:07:42.168968   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-222693 --wait=true -v=7 --alsologtostderr: (2m30.226269034s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-222693
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (186.88s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-222693 node delete m03 -v=7 --alsologtostderr: (11.344716784s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.08s)
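The go-template above walks every node object and, for each condition of type Ready, prints that condition's status on its own line, so the test can assert that each remaining node reports True after the delete. A runnable rendering of the same query (shell quoting adjusted from the framework's argument dump):
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expected: one " True" line per remaining node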

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-222693 stop -v=7 --alsologtostderr: (35.419655204s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr: exit status 7 (95.837174ms)

                                                
                                                
-- stdout --
	ha-222693
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-222693-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-222693-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:09:53.985588  104165 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:09:53.985720  104165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:53.985730  104165 out.go:358] Setting ErrFile to fd 2...
	I0915 07:09:53.985734  104165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:53.985920  104165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 07:09:53.986184  104165 out.go:352] Setting JSON to false
	I0915 07:09:53.986220  104165 mustload.go:65] Loading cluster: ha-222693
	I0915 07:09:53.986309  104165 notify.go:220] Checking for updates...
	I0915 07:09:53.986716  104165 config.go:182] Loaded profile config "ha-222693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:09:53.986732  104165 status.go:255] checking status of ha-222693 ...
	I0915 07:09:53.987187  104165 cli_runner.go:164] Run: docker container inspect ha-222693 --format={{.State.Status}}
	I0915 07:09:54.004673  104165 status.go:330] ha-222693 host status = "Stopped" (err=<nil>)
	I0915 07:09:54.004696  104165 status.go:343] host is not running, skipping remaining checks
	I0915 07:09:54.004702  104165 status.go:257] ha-222693 status: &{Name:ha-222693 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:54.004737  104165 status.go:255] checking status of ha-222693-m02 ...
	I0915 07:09:54.004987  104165 cli_runner.go:164] Run: docker container inspect ha-222693-m02 --format={{.State.Status}}
	I0915 07:09:54.021544  104165 status.go:330] ha-222693-m02 host status = "Stopped" (err=<nil>)
	I0915 07:09:54.021568  104165 status.go:343] host is not running, skipping remaining checks
	I0915 07:09:54.021573  104165 status.go:257] ha-222693-m02 status: &{Name:ha-222693-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:54.021590  104165 status.go:255] checking status of ha-222693-m04 ...
	I0915 07:09:54.021814  104165 cli_runner.go:164] Run: docker container inspect ha-222693-m04 --format={{.State.Status}}
	I0915 07:09:54.038408  104165 status.go:330] ha-222693-m04 host status = "Stopped" (err=<nil>)
	I0915 07:09:54.038451  104165 status.go:343] host is not running, skipping remaining checks
	I0915 07:09:54.038461  104165 status.go:257] ha-222693-m04 status: &{Name:ha-222693-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.52s)
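Note that the non-zero exit above is the expected outcome: after `stop`, the `status` subcommand encodes the stopped state in its exit code (7 in this run) while still printing the per-node breakdown. A minimal sketch of rechecking by hand, assuming the binary built by this job:
	out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
	echo "status exit code: $?"   # 7 here, matching what the test accepts for a fully stopped cluster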

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (107.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-222693 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0915 07:09:58.307593   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:10:26.011859   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-222693 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m46.75262116s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (107.51s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (66.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-222693 --control-plane -v=7 --alsologtostderr
E0915 07:12:34.135329   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-222693 --control-plane -v=7 --alsologtostderr: (1m5.544663128s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.35s)
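This is the step that grows the control plane again: `node add --control-plane` joins a further control-plane node to the profile, and the follow-up `status` lists it alongside the existing nodes. Replayed by hand against this profile:
	out/minikube-linux-amd64 node add -p ha-222693 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-222693 status -v=7 --alsologtostderr   # the new node should appear as a Control Plane entry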

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

                                                
                                    
TestJSONOutput/start/Command (38.01s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-721219 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-721219 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (38.011325151s)
--- PASS: TestJSONOutput/start/Command (38.01s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-721219 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-721219 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.72s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-721219 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-721219 --output=json --user=testUser: (5.716381012s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-744126 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-744126 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.738735ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"41cf5653-67e2-462d-b4ae-9fe11bb21bf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-744126] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"36e1efb4-34e7-4916-a2cc-eb5abf5fc8e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"4d5e256f-1ed9-4c27-8b07-9a69b3da600a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1234c7b7-a31e-4ed0-be6a-75854f3ba9ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig"}}
	{"specversion":"1.0","id":"7c7538d7-7af5-4f02-9ebd-85b4861e69ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube"}}
	{"specversion":"1.0","id":"0f1ec371-0699-4ae5-93ec-bedc3bcc42e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b9516026-22a7-4c9b-99aa-5815eeba12c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e7d0ff3d-97ee-4e6a-830b-5910a47dd5d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-744126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-744126
--- PASS: TestErrorJSONOutput (0.19s)
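Every line emitted under `--output=json` is a CloudEvents-style JSON object with a `type` field, which is what makes the error assertion here mechanical. A sketch of filtering the stream for the error event, assuming `jq` is available on the host:
	out/minikube-linux-amd64 start -p json-output-error-744126 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# prints: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64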

                                                
                                    
TestKicCustomNetwork/create_custom_network (27.01s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-317453 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-317453 --network=: (24.981245743s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-317453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-317453
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-317453: (2.012467979s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.01s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.65s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-763226 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-763226 --network=bridge: (24.724247413s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-763226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-763226
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-763226: (1.909299415s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.65s)

                                                
                                    
TestKicExistingNetwork (23.52s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-977524 --network=existing-network
E0915 07:14:58.308052   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-977524 --network=existing-network: (21.48772221s)
helpers_test.go:175: Cleaning up "existing-network-977524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-977524
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-977524: (1.888656481s)
--- PASS: TestKicExistingNetwork (23.52s)

                                                
                                    
TestKicCustomSubnet (25.81s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-544734 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-544734 --subnet=192.168.60.0/24: (23.757082286s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-544734 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-544734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-544734
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-544734: (2.034996427s)
--- PASS: TestKicCustomSubnet (25.81s)
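The `docker network inspect` call is the actual assertion here: it should echo back the subnet requested via `--subnet`. Standalone:
	docker network inspect custom-subnet-544734 --format "{{(index .IPAM.Config 0).Subnet}}"
	# expected: 192.168.60.0/24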

                                                
                                    
TestKicStaticIP (25.84s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-594779 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-594779 --static-ip=192.168.200.200: (23.66929642s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-594779 ip
helpers_test.go:175: Cleaning up "static-ip-594779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-594779
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-594779: (2.052333865s)
--- PASS: TestKicStaticIP (25.84s)
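Similarly, `minikube ip` closes the loop for `--static-ip`: the reported address should be exactly the one requested at start. Standalone, against this profile:
	out/minikube-linux-amd64 -p static-ip-594779 ip
	# expected: 192.168.200.200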

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (51.26s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-458586 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-458586 --driver=docker  --container-runtime=crio: (22.485383522s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-470030 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-470030 --driver=docker  --container-runtime=crio: (23.783300543s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-458586
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-470030
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-470030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-470030
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-470030: (1.808704147s)
helpers_test.go:175: Cleaning up "first-458586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-458586
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-458586: (2.137893479s)
--- PASS: TestMinikubeProfile (51.26s)
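The sequence above exercises profile switching: `minikube profile <name>` makes the named cluster the active profile, and `profile list -ojson` produces the machine-readable listing the test parses to confirm the switch took effect. By hand:
	out/minikube-linux-amd64 profile first-458586    # switch the active profile
	out/minikube-linux-amd64 profile list -ojson     # inspect the listing the test checks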

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.48s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-921586 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-921586 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.480193688s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.48s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-921586 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.2s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-934344 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-934344 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.200139078s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.20s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-934344 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-921586 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-921586 --alsologtostderr -v=5: (1.580440901s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-934344 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-934344
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-934344: (1.16849223s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-934344
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-934344: (6.160923942s)
--- PASS: TestMountStart/serial/RestartStopped (7.16s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-934344 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)
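Taken together, the MountStart sequence shows that the host mount configured at start (`--mount` with an explicit `--mount-port`) is still reachable after deleting the sibling profile and after a stop/restart cycle; every verification step is the same one-liner:
	out/minikube-linux-amd64 -p mount-start-2-934344 ssh -- ls /minikube-host   # succeeds only if the host mount is present in the guest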

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (66.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-192059 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0915 07:17:34.135182   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-192059 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.577411299s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.01s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-192059 -- rollout status deployment/busybox: (2.148789783s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-frm82 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-qqrd8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-frm82 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-qqrd8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-frm82 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-qqrd8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.55s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-frm82 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-frm82 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-qqrd8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-192059 -- exec busybox-7dff88458-qqrd8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.67s)
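The pipeline above resolves `host.minikube.internal` from inside each pod and pings the result: `awk 'NR==5'` picks the line of busybox nslookup output carrying the answer, and `cut -d' ' -f3` takes the address field (192.168.67.1 in this run, the gateway of the cluster's Docker network). Inside a pod this reduces to:
	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)   # line/field positions assume busybox nslookup's output layout
	ping -c 1 "$HOST_IP"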

                                                
                                    
TestMultiNode/serial/AddNode (29s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-192059 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-192059 -v 3 --alsologtostderr: (28.411139119s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.00s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-192059 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp testdata/cp-test.txt multinode-192059:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2665149417/001/cp-test_multinode-192059.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059:/home/docker/cp-test.txt multinode-192059-m02:/home/docker/cp-test_multinode-192059_multinode-192059-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m02 "sudo cat /home/docker/cp-test_multinode-192059_multinode-192059-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059:/home/docker/cp-test.txt multinode-192059-m03:/home/docker/cp-test_multinode-192059_multinode-192059-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m03 "sudo cat /home/docker/cp-test_multinode-192059_multinode-192059-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp testdata/cp-test.txt multinode-192059-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2665149417/001/cp-test_multinode-192059-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059-m02:/home/docker/cp-test.txt multinode-192059:/home/docker/cp-test_multinode-192059-m02_multinode-192059.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059 "sudo cat /home/docker/cp-test_multinode-192059-m02_multinode-192059.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059-m02:/home/docker/cp-test.txt multinode-192059-m03:/home/docker/cp-test_multinode-192059-m02_multinode-192059-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m03 "sudo cat /home/docker/cp-test_multinode-192059-m02_multinode-192059-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp testdata/cp-test.txt multinode-192059-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2665149417/001/cp-test_multinode-192059-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059-m03:/home/docker/cp-test.txt multinode-192059:/home/docker/cp-test_multinode-192059-m03_multinode-192059.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059 "sudo cat /home/docker/cp-test_multinode-192059-m03_multinode-192059.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059-m03:/home/docker/cp-test.txt multinode-192059-m02:/home/docker/cp-test_multinode-192059-m03_multinode-192059-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059-m02 "sudo cat /home/docker/cp-test_multinode-192059-m03_multinode-192059-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.78s)
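All of the copies above follow the same `minikube cp` shape: either side of the copy may be prefixed with a node name to address a path on that node, and an unprefixed path refers to the local machine. A sketch of a node-to-node copy in this profile (the target filename is hypothetical):
	out/minikube-linux-amd64 -p multinode-192059 cp multinode-192059-m02:/home/docker/cp-test.txt multinode-192059:/home/docker/cp-test-copy.txt
	out/minikube-linux-amd64 -p multinode-192059 ssh -n multinode-192059 "sudo cat /home/docker/cp-test-copy.txt"   # verify the copy landed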

                                                
                                    
TestMultiNode/serial/StopNode (2.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-192059 node stop m03: (1.16829932s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-192059 status: exit status 7 (449.371524ms)

                                                
                                                
-- stdout --
	multinode-192059
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-192059-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-192059-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-192059 status --alsologtostderr: exit status 7 (456.463149ms)

                                                
                                                
-- stdout --
	multinode-192059
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-192059-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-192059-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:19:02.001551  169633 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:19:02.001671  169633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:19:02.001679  169633 out.go:358] Setting ErrFile to fd 2...
	I0915 07:19:02.001683  169633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:19:02.001887  169633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 07:19:02.002041  169633 out.go:352] Setting JSON to false
	I0915 07:19:02.002066  169633 mustload.go:65] Loading cluster: multinode-192059
	I0915 07:19:02.002116  169633 notify.go:220] Checking for updates...
	I0915 07:19:02.002451  169633 config.go:182] Loaded profile config "multinode-192059": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:19:02.002464  169633 status.go:255] checking status of multinode-192059 ...
	I0915 07:19:02.002894  169633 cli_runner.go:164] Run: docker container inspect multinode-192059 --format={{.State.Status}}
	I0915 07:19:02.023873  169633 status.go:330] multinode-192059 host status = "Running" (err=<nil>)
	I0915 07:19:02.023898  169633 host.go:66] Checking if "multinode-192059" exists ...
	I0915 07:19:02.024132  169633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-192059
	I0915 07:19:02.040946  169633 host.go:66] Checking if "multinode-192059" exists ...
	I0915 07:19:02.041204  169633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:19:02.041243  169633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-192059
	I0915 07:19:02.059068  169633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/multinode-192059/id_rsa Username:docker}
	I0915 07:19:02.149579  169633 ssh_runner.go:195] Run: systemctl --version
	I0915 07:19:02.153749  169633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:19:02.164301  169633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:19:02.213236  169633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-15 07:19:02.204016543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 07:19:02.213801  169633 kubeconfig.go:125] found "multinode-192059" server: "https://192.168.67.2:8443"
	I0915 07:19:02.213827  169633 api_server.go:166] Checking apiserver status ...
	I0915 07:19:02.213865  169633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:19:02.224158  169633 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1473/cgroup
	I0915 07:19:02.233287  169633 api_server.go:182] apiserver freezer: "8:freezer:/docker/1f08839769dc0119cdcf5871d4f50d9d3570f56072efc14c4ce4d0ab6d35dd50/crio/crio-850d9b99deb9d37908e40aae8bb648215271046d5f9ab813eb58bfffd0d0e3c2"
	I0915 07:19:02.233351  169633 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1f08839769dc0119cdcf5871d4f50d9d3570f56072efc14c4ce4d0ab6d35dd50/crio/crio-850d9b99deb9d37908e40aae8bb648215271046d5f9ab813eb58bfffd0d0e3c2/freezer.state
	I0915 07:19:02.241459  169633 api_server.go:204] freezer state: "THAWED"
	I0915 07:19:02.241494  169633 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0915 07:19:02.245140  169633 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0915 07:19:02.245162  169633 status.go:422] multinode-192059 apiserver status = Running (err=<nil>)
	I0915 07:19:02.245171  169633 status.go:257] multinode-192059 status: &{Name:multinode-192059 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:19:02.245205  169633 status.go:255] checking status of multinode-192059-m02 ...
	I0915 07:19:02.245465  169633 cli_runner.go:164] Run: docker container inspect multinode-192059-m02 --format={{.State.Status}}
	I0915 07:19:02.262329  169633 status.go:330] multinode-192059-m02 host status = "Running" (err=<nil>)
	I0915 07:19:02.262350  169633 host.go:66] Checking if "multinode-192059-m02" exists ...
	I0915 07:19:02.262683  169633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-192059-m02
	I0915 07:19:02.280877  169633 host.go:66] Checking if "multinode-192059-m02" exists ...
	I0915 07:19:02.281119  169633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:19:02.281163  169633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-192059-m02
	I0915 07:19:02.297771  169633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19644-5979/.minikube/machines/multinode-192059-m02/id_rsa Username:docker}
	I0915 07:19:02.388907  169633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:19:02.399000  169633 status.go:257] multinode-192059-m02 status: &{Name:multinode-192059-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:19:02.399044  169633 status.go:255] checking status of multinode-192059-m03 ...
	I0915 07:19:02.399296  169633 cli_runner.go:164] Run: docker container inspect multinode-192059-m03 --format={{.State.Status}}
	I0915 07:19:02.415988  169633 status.go:330] multinode-192059-m03 host status = "Stopped" (err=<nil>)
	I0915 07:19:02.416014  169633 status.go:343] host is not running, skipping remaining checks
	I0915 07:19:02.416022  169633 status.go:257] multinode-192059-m03 status: &{Name:multinode-192059-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-192059 node start m03 -v=7 --alsologtostderr: (8.75467215s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.40s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (102.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-192059
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-192059
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-192059: (24.614929111s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-192059 --wait=true -v=8 --alsologtostderr
E0915 07:19:58.307253   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:20:37.202803   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-192059 --wait=true -v=8 --alsologtostderr: (1m17.98653596s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-192059
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.69s)

TestMultiNode/serial/DeleteNode (5.17s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-192059 node delete m03: (4.610934485s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)

TestMultiNode/serial/StopMultiNode (23.67s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 stop
E0915 07:21:21.376105   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-192059 stop: (23.502527009s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-192059 status: exit status 7 (84.39027ms)

-- stdout --
	multinode-192059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-192059-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-192059 status --alsologtostderr: exit status 7 (77.999177ms)

-- stdout --
	multinode-192059
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-192059-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 07:21:23.307440  179468 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:21:23.307543  179468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:21:23.307557  179468 out.go:358] Setting ErrFile to fd 2...
	I0915 07:21:23.307562  179468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:21:23.307748  179468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 07:21:23.307918  179468 out.go:352] Setting JSON to false
	I0915 07:21:23.307947  179468 mustload.go:65] Loading cluster: multinode-192059
	I0915 07:21:23.308065  179468 notify.go:220] Checking for updates...
	I0915 07:21:23.308505  179468 config.go:182] Loaded profile config "multinode-192059": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:21:23.308532  179468 status.go:255] checking status of multinode-192059 ...
	I0915 07:21:23.309006  179468 cli_runner.go:164] Run: docker container inspect multinode-192059 --format={{.State.Status}}
	I0915 07:21:23.326099  179468 status.go:330] multinode-192059 host status = "Stopped" (err=<nil>)
	I0915 07:21:23.326118  179468 status.go:343] host is not running, skipping remaining checks
	I0915 07:21:23.326123  179468 status.go:257] multinode-192059 status: &{Name:multinode-192059 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:21:23.326153  179468 status.go:255] checking status of multinode-192059-m02 ...
	I0915 07:21:23.326380  179468 cli_runner.go:164] Run: docker container inspect multinode-192059-m02 --format={{.State.Status}}
	I0915 07:21:23.343646  179468 status.go:330] multinode-192059-m02 host status = "Stopped" (err=<nil>)
	I0915 07:21:23.343668  179468 status.go:343] host is not running, skipping remaining checks
	I0915 07:21:23.343674  179468 status.go:257] multinode-192059-m02 status: &{Name:multinode-192059-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.67s)

TestMultiNode/serial/RestartMultiNode (49.88s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-192059 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-192059 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (49.325589354s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-192059 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.88s)

TestMultiNode/serial/ValidateNameConflict (22.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-192059
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-192059-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-192059-m02 --driver=docker  --container-runtime=crio: exit status 14 (63.227791ms)

-- stdout --
	* [multinode-192059-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-192059-m02' is duplicated with machine name 'multinode-192059-m02' in profile 'multinode-192059'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-192059-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-192059-m03 --driver=docker  --container-runtime=crio: (20.285947867s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-192059
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-192059: exit status 80 (272.264245ms)

-- stdout --
	* Adding node m03 to cluster multinode-192059 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-192059-m03 already exists in multinode-192059-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-192059-m03
E0915 07:22:34.134360   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-192059-m03: (1.807747619s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.47s)
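
Note: both failures above are the guardrails under test: a new profile may not reuse a generated node name (<profile>-m02, -m03, ...), and `node add` refuses a node whose name an existing profile already claims. Growing the cluster works once the colliding profile is gone, which is what the cleanup at the end does:

	minikube delete -p multinode-192059-m03   # remove the conflicting profile
	minikube node add -p multinode-192059     # then adding node m03 can proceed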

TestPreload (101.4s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-786958 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-786958 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m15.603934001s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-786958 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-786958
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-786958: (5.610731865s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-786958 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-786958 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.749978605s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-786958 image list
helpers_test.go:175: Cleaning up "test-preload-786958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-786958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-786958: (2.28696063s)
--- PASS: TestPreload (101.40s)

TestScheduledStopUnix (97.4s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-199638 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-199638 --memory=2048 --driver=docker  --container-runtime=crio: (21.026748356s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-199638 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-199638 -n scheduled-stop-199638
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-199638 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-199638 --cancel-scheduled
E0915 07:24:58.308094   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-199638 -n scheduled-stop-199638
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-199638
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-199638 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-199638
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-199638: exit status 7 (60.188413ms)

-- stdout --
	scheduled-stop-199638
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-199638 -n scheduled-stop-199638
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-199638 -n scheduled-stop-199638: exit status 7 (61.134166ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-199638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-199638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-199638: (5.131170534s)
--- PASS: TestScheduledStopUnix (97.40s)
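
Note: the scheduled-stop flow exercised above, condensed; every flag is taken verbatim from the run:

	minikube stop -p scheduled-stop-199638 --schedule 5m          # arm a stop five minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop-199638
	minikube stop -p scheduled-stop-199638 --cancel-scheduled     # disarm it
	minikube stop -p scheduled-stop-199638 --schedule 15s         # a short timer left to fire
	minikube status --format={{.Host}} -p scheduled-stop-199638   # then reports "Stopped" (exit 7)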

TestInsufficientStorage (12.43s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-405538 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-405538 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.13051374s)

-- stdout --
	{"specversion":"1.0","id":"96dc40b1-3296-4f15-90d0-31229a45daca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-405538] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"800aa209-b363-408b-a0dd-2a14a1d6ae14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"19f7615f-b113-47fd-994c-012ac69c8fc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41d2eaaa-bc0a-4230-8fa9-f8ee73cc8069","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig"}}
	{"specversion":"1.0","id":"d019345f-a1f6-4532-9ec1-edd16faed2d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube"}}
	{"specversion":"1.0","id":"50d5656c-ab9e-470c-bcce-4ed19452439f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bc7e9edf-a6e8-4e8d-9b0a-5139bb7e7ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a350f8f2-9c1d-48bc-b04d-45cdef20387f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e7ef38a2-a3ef-4a16-ab3e-15794f33209c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c70d2bf2-6d00-463c-a430-646f9f7f5ff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa3f7a0e-6bee-41a5-b04b-1ad493a9275d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ca9421bc-c2b0-4a8d-a291-fb1af1b1601f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-405538\" primary control-plane node in \"insufficient-storage-405538\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf1dd5d3-a268-4448-9147-112eb20674af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"87cf2974-bc09-407b-95d1-74fc4825ff19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f73a923-896c-48d9-983d-c1e6f74d935c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-405538 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-405538 --output=json --layout=cluster: exit status 7 (252.134864ms)

-- stdout --
	{"Name":"insufficient-storage-405538","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-405538","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 07:26:08.669019  201857 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-405538" does not appear in /home/jenkins/minikube-integration/19644-5979/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-405538 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-405538 --output=json --layout=cluster: exit status 7 (249.954353ms)

-- stdout --
	{"Name":"insufficient-storage-405538","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-405538","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 07:26:08.919671  201957 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-405538" does not appear in /home/jenkins/minikube-integration/19644-5979/kubeconfig
	E0915 07:26:08.929189  201957 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/insufficient-storage-405538/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-405538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-405538
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-405538: (1.798518468s)
--- PASS: TestInsufficientStorage (12.43s)
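
Note: the --output=json --layout=cluster form above is meant for tooling. A minimal sketch of consuming it, assuming jq is available (jq is not part of minikube); the field names come from the JSON captured above:

	minikube status -p insufficient-storage-405538 --output=json --layout=cluster \
	  | jq -r '.StatusName, (.Nodes[].Components | keys[])'
	# InsufficientStorage
	# apiserver
	# kubelet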

TestRunningBinaryUpgrade (57.83s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.85317995 start -p running-upgrade-402217 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0915 07:27:34.135364   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.85317995 start -p running-upgrade-402217 --memory=2200 --vm-driver=docker  --container-runtime=crio: (31.446733822s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-402217 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-402217 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.459319868s)
helpers_test.go:175: Cleaning up "running-upgrade-402217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-402217
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-402217: (2.455393463s)
--- PASS: TestRunningBinaryUpgrade (57.83s)

TestKubernetesUpgrade (347.67s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-546714 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-546714 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.830478994s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-546714
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-546714: (2.851609101s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-546714 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-546714 status --format={{.Host}}: exit status 7 (88.593143ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-546714 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-546714 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.875994786s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-546714 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-546714 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-546714 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (100.579116ms)

-- stdout --
	* [kubernetes-upgrade-546714] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-546714
	    minikube start -p kubernetes-upgrade-546714 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5467142 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-546714 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-546714 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-546714 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.584321984s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-546714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-546714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-546714: (4.269698811s)
--- PASS: TestKubernetesUpgrade (347.67s)

TestMissingContainerUpgrade (130.56s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3961483548 start -p missing-upgrade-559701 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3961483548 start -p missing-upgrade-559701 --memory=2200 --driver=docker  --container-runtime=crio: (1m5.479161807s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-559701
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-559701: (3.382605869s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-559701
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-559701 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-559701 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (53.061388971s)
helpers_test.go:175: Cleaning up "missing-upgrade-559701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-559701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-559701: (8.191919937s)
--- PASS: TestMissingContainerUpgrade (130.56s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-580368 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-580368 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (67.383116ms)

-- stdout --
	* [NoKubernetes-580368] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
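
Note: the usage check above is the expected behavior: --no-kubernetes and --kubernetes-version are mutually exclusive. Both escape hatches appear verbatim in this log, one in the error text and one in a later serial step:

	minikube config unset kubernetes-version    # clear a globally pinned version
	minikube start -p NoKubernetes-580368 --no-kubernetes --driver=docker --container-runtime=crio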

TestStoppedBinaryUpgrade/Setup (0.47s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

TestNoKubernetes/serial/StartWithK8s (34.22s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-580368 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-580368 --driver=docker  --container-runtime=crio: (33.931686208s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-580368 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.22s)

TestStoppedBinaryUpgrade/Upgrade (89.65s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1466315796 start -p stopped-upgrade-591335 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1466315796 start -p stopped-upgrade-591335 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m0.622894907s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1466315796 -p stopped-upgrade-591335 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1466315796 -p stopped-upgrade-591335 stop: (2.486708296s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-591335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-591335 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.541677835s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (89.65s)

TestNoKubernetes/serial/StartWithStopK8s (12.12s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-580368 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-580368 --no-kubernetes --driver=docker  --container-runtime=crio: (9.96347301s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-580368 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-580368 status -o json: exit status 2 (277.889784ms)

-- stdout --
	{"Name":"NoKubernetes-580368","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-580368
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-580368: (1.87478072s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.12s)

TestNoKubernetes/serial/Start (7.77s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-580368 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-580368 --no-kubernetes --driver=docker  --container-runtime=crio: (7.767628962s)
--- PASS: TestNoKubernetes/serial/Start (7.77s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-580368 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-580368 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.78641ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (11.44s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (10.534350227s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (11.44s)

TestNoKubernetes/serial/Stop (4.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-580368
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-580368: (4.059624592s)
--- PASS: TestNoKubernetes/serial/Stop (4.06s)

TestNoKubernetes/serial/StartNoArgs (9.8s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-580368 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-580368 --driver=docker  --container-runtime=crio: (9.800665296s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.80s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-580368 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-580368 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.776702ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-591335
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-591335: (1.145826339s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

TestPause/serial/Start (76.51s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-377398 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-377398 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m16.507267569s)
--- PASS: TestPause/serial/Start (76.51s)

TestNetworkPlugins/group/false (3.01s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-609794 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-609794 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (154.016523ms)

-- stdout --
	* [false-609794] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0915 07:28:53.894169  246151 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:28:53.894340  246151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:28:53.894350  246151 out.go:358] Setting ErrFile to fd 2...
	I0915 07:28:53.894355  246151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:28:53.894640  246151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-5979/.minikube/bin
	I0915 07:28:53.895398  246151 out.go:352] Setting JSON to false
	I0915 07:28:53.896654  246151 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4285,"bootTime":1726381049,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0915 07:28:53.896748  246151 start.go:139] virtualization: kvm guest
	I0915 07:28:53.898954  246151 out.go:177] * [false-609794] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0915 07:28:53.902047  246151 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:28:53.902078  246151 notify.go:220] Checking for updates...
	I0915 07:28:53.904713  246151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:28:53.905976  246151 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-5979/kubeconfig
	I0915 07:28:53.907388  246151 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-5979/.minikube
	I0915 07:28:53.908795  246151 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0915 07:28:53.910139  246151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:28:53.912131  246151 config.go:182] Loaded profile config "cert-expiration-370110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:28:53.912309  246151 config.go:182] Loaded profile config "kubernetes-upgrade-546714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:28:53.912450  246151 config.go:182] Loaded profile config "pause-377398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0915 07:28:53.912582  246151 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:28:53.940353  246151 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 07:28:53.940455  246151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:28:53.995931  246151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-15 07:28:53.98605815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0915 07:28:53.996039  246151 docker.go:318] overlay module found
	I0915 07:28:53.997953  246151 out.go:177] * Using the docker driver based on user configuration
	I0915 07:28:53.999278  246151 start.go:297] selected driver: docker
	I0915 07:28:53.999290  246151 start.go:901] validating driver "docker" against <nil>
	I0915 07:28:53.999301  246151 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:28:54.001997  246151 out.go:201] 
	W0915 07:28:54.003642  246151 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0915 07:28:54.005069  246151 out.go:201] 

** /stderr **
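
Note: the MK_USAGE exit above is the point of this test: crio ships no built-in pod networking, so --cni=false is rejected. A start that satisfies the check names any real CNI; --cni=bridge below is an illustrative pick from minikube's documented options, not a command from this run:

	minikube start -p false-609794 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio
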
net_test.go:88: 
----------------------- debugLogs start: false-609794 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-609794

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-609794

>>> host: /etc/nsswitch.conf:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: /etc/hosts:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: /etc/resolv.conf:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-609794

>>> host: crictl pods:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: crictl containers:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-609794" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-370110
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-546714
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-377398
contexts:
- context:
    cluster: cert-expiration-370110
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-370110
  name: cert-expiration-370110
- context:
    cluster: kubernetes-upgrade-546714
    user: kubernetes-upgrade-546714
  name: kubernetes-upgrade-546714
- context:
    cluster: pause-377398
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-377398
  name: pause-377398
current-context: cert-expiration-370110
kind: Config
preferences: {}
users:
- name: cert-expiration-370110
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/cert-expiration-370110/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/cert-expiration-370110/client.key
- name: kubernetes-upgrade-546714
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/kubernetes-upgrade-546714/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/kubernetes-upgrade-546714/client.key
- name: pause-377398
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/pause-377398/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/pause-377398/client.key
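
Three profiles are registered in that kubeconfig (cert-expiration-370110, kubernetes-upgrade-546714, pause-377398); "false-609794" is not among them, which is why every probe above fails before ever reaching a server. A minimal sketch of checking that from Go with client-go's clientcmd; the kubeconfig path is an assumption based on the directory shown in the log:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is an assumption: the integration run points KUBECONFIG at a file
	// under the minikube-integration directory seen in the log above.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/19644-5979/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %s -> cluster %s (user %s)\n", name, ctx.Cluster, ctx.AuthInfo)
	}
	if _, ok := cfg.Contexts["false-609794"]; !ok {
		// Matches the kubectl error recorded in the probes above.
		fmt.Println(`context "false-609794" does not exist`)
	}
}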

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-609794

>>> host: docker daemon status:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: docker daemon config:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: /etc/docker/daemon.json:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: docker system info:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: cri-docker daemon status:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: cri-docker daemon config:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: cri-dockerd version:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: containerd daemon status:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: containerd daemon config:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: /etc/containerd/config.toml:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: containerd config dump:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: crio daemon status:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: crio daemon config:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: /etc/crio:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

>>> host: crio config:
* Profile "false-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-609794"

----------------------- debugLogs end: false-609794 [took: 2.7123679s] --------------------------------
helpers_test.go:175: Cleaning up "false-609794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-609794
--- PASS: TestNetworkPlugins/group/false (3.01s)
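
The dump above is mechanical: the post-mortem collector walks a fixed list of probes for the named profile and records whatever each command prints, so a profile that was rejected within 3 seconds yields the same "not found" output dozens of times. A rough sketch of that collection loop; the probe table here is my own abbreviation, not the harness's actual list:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "false-609794"
	// A small sample of the probes seen in the log; the real list is longer.
	probes := []struct {
		name string
		argv []string
	}{
		{"k8s: cms", []string{"kubectl", "--context", profile, "get", "cm", "-A"}},
		{"host: ip a s", []string{"minikube", "-p", profile, "ssh", "ip a s"}},
		{"host: crio config", []string{"minikube", "-p", profile, "ssh", "sudo crio config"}},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.name)
		// Failures are recorded, not fatal: collection continues with the next probe.
		out, _ := exec.Command(p.argv[0], p.argv[1:]...).CombinedOutput()
		fmt.Println(string(out))
	}
}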

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (38.8s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-377398 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-377398 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.779981682s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.80s)

                                                
                                    
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-377398 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-377398 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-377398 --output=json --layout=cluster: exit status 2 (291.329129ms)

                                                
                                                
-- stdout --
	{"Name":"pause-377398","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-377398","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
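
The --output=json --layout=cluster payload above uses HTTP-flavoured status codes (200 OK, 405 Stopped, 418 Paused), and the command itself exits 2 while the cluster is paused, so callers have to parse the JSON rather than trust the exit code. A small decoding sketch; the struct shapes are inferred from this one payload, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Field names copied from the status output above; shapes are inferred.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name          string
	StatusCode    int
	StatusName    string
	BinaryVersion string
	Nodes         []node
}

func main() {
	payload := `{"Name":"pause-377398","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.34.0","Nodes":[{"Name":"pause-377398","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(payload), &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode) // pause-377398: Paused (418)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}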

                                                
                                    
TestPause/serial/Unpause (0.6s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-377398 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

                                                
                                    
TestPause/serial/PauseAgain (0.73s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-377398 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

                                                
                                    
TestPause/serial/DeletePaused (2.33s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-377398 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-377398 --alsologtostderr -v=5: (2.334698539s)
--- PASS: TestPause/serial/DeletePaused (2.33s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (2.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.278290981s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-377398
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-377398: exit status 1 (20.738599ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-377398: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.35s)
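
VerifyDeletedResources asserts a negative: after delete -p, the profile's Docker volume must be gone, so the failing docker volume inspect above is the expected outcome. A sketch of that inverted check (the helper name is mine, not the test's):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// expectAbsent is a hypothetical helper: it passes when `docker volume
// inspect` fails with "no such volume", mirroring the assertion above.
func expectAbsent(volume string) error {
	out, err := exec.Command("docker", "volume", "inspect", volume).CombinedOutput()
	if err == nil {
		return fmt.Errorf("volume %s still exists: %s", volume, out)
	}
	if !strings.Contains(string(out), "no such volume") {
		return fmt.Errorf("unexpected failure: %v: %s", err, out)
	}
	return nil // non-zero exit plus "no such volume" is the success case
}

func main() {
	if err := expectAbsent("pause-377398"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("volume gone, as expected")
}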

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (143.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-423109 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-423109 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m23.245750709s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (143.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (71.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-688254 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0915 07:29:58.307616   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-688254 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m11.234295261s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-688254 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f138a393-a535-461f-b407-8943779ec602] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f138a393-a535-461f-b407-8943779ec602] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00355057s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-688254 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.23s)
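
DeployApp applies testdata/busybox.yaml, waits (up to 8m) for a pod matching integration-test=busybox to report Running, then execs ulimit -n inside it. A condensed sketch of that sequence driven through kubectl; the context name comes from this run, and the polling interval is my choice:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "embed-certs-688254"
	if out, err := exec.Command("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml").CombinedOutput(); err != nil {
		log.Fatalf("create: %v: %s", err, out)
	}
	// Poll until the labelled pod reports Running (the test allows up to 8m).
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", "integration-test=busybox", "-o", "jsonpath={.items[0].status.phase}").CombinedOutput()
		if strings.TrimSpace(string(out)) == "Running" {
			break
		}
		time.Sleep(2 * time.Second)
	}
	out, err := exec.Command("kubectl", "--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		log.Fatalf("exec: %v: %s", err, out)
	}
	fmt.Printf("open-file limit in pod: %s", out)
}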

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-688254 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-688254 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-688254 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-688254 --alsologtostderr -v=3: (11.849339898s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-688254 -n embed-certs-688254
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-688254 -n embed-certs-688254: exit status 7 (64.203875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-688254 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)
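
minikube status exits 7 when the host is stopped, which is exactly the state EnableAddonAfterStop expects after the preceding Stop step, hence "status error: exit status 7 (may be ok)". A sketch of tolerating that specific code with os/exec; the 7 == Stopped mapping is read off this log, not from minikube's documented exit-code table:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "embed-certs-688254", "-n", "embed-certs-688254")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		// Host is running; nothing to tolerate.
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// The stopped state the test expects: note it and carry on.
		fmt.Printf("status: %s (exit 7, may be ok)\n", out)
	default:
		log.Fatalf("status failed: %v", err)
	}
	// ...proceed to `addons enable dashboard`, as the test does next.
}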

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (261.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-688254 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-688254 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m21.665184629s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-688254 -n embed-certs-688254
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (261.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (52.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-455004 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-455004 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (52.578027256s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-423109 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b5b4249-55fa-4c8c-8a41-1edb61ac1c6e] Pending
helpers_test.go:344: "busybox" [1b5b4249-55fa-4c8c-8a41-1edb61ac1c6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b5b4249-55fa-4c8c-8a41-1edb61ac1c6e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004066164s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-423109 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-423109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-423109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.373584003s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-423109 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-423109 --alsologtostderr -v=3
E0915 07:32:34.134806   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-423109 --alsologtostderr -v=3: (13.256669539s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-423109 -n old-k8s-version-423109
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-423109 -n old-k8s-version-423109: exit status 7 (67.70487ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-423109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (138.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-423109 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-423109 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m17.907244041s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-423109 -n old-k8s-version-423109
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (138.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-784745 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-784745 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m11.656704519s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.66s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-455004 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7b774f22-75cf-47b7-b6b7-5a488d37d88a] Pending
helpers_test.go:344: "busybox" [7b774f22-75cf-47b7-b6b7-5a488d37d88a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7b774f22-75cf-47b7-b6b7-5a488d37d88a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004180806s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-455004 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-455004 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-455004 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-455004 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-455004 --alsologtostderr -v=3: (12.968087263s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-455004 -n no-preload-455004
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-455004 -n no-preload-455004: exit status 7 (74.157295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-455004 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-455004 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-455004 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.010178484s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-455004 -n no-preload-455004
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-784745 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [df92c37a-1157-4ee9-8e16-c6ea3fdafc18] Pending
helpers_test.go:344: "busybox" [df92c37a-1157-4ee9-8e16-c6ea3fdafc18] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [df92c37a-1157-4ee9-8e16-c6ea3fdafc18] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003487896s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-784745 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-784745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-784745 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-784745 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-784745 --alsologtostderr -v=3: (11.848205861s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745: exit status 7 (67.541921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-784745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-784745 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-784745 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m23.022189041s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d6b6t" [0424e4d7-602b-404b-9a59-fe9efee906e6] Running
E0915 07:34:58.307283   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/functional-988233/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003660785s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d6b6t" [0424e4d7-602b-404b-9a59-fe9efee906e6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003231752s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-423109 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-423109 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-423109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-423109 -n old-k8s-version-423109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-423109 -n old-k8s-version-423109: exit status 2 (285.640767ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-423109 -n old-k8s-version-423109
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-423109 -n old-k8s-version-423109: exit status 2 (287.007046ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-423109 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-423109 -n old-k8s-version-423109
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-423109 -n old-k8s-version-423109
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-784468 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-784468 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (28.263038664s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-784468 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-784468 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-784468 --alsologtostderr -v=3: (1.21718931s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-784468 -n newest-cni-784468
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-784468 -n newest-cni-784468: exit status 7 (63.074562ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-784468 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-784468 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-784468 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (12.443199308s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-784468 -n newest-cni-784468
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9v7d2" [31e0a7fb-b471-4232-9098-01a303465e85] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004542849s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-784468 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)
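VerifyKubernetesImages lists every image present on the node and flags anything outside the stock minikube set (here, the kindnet CNI image). A sketch of doing the same filtering by hand; the jq filter and the repoTags field name are assumptions about the JSON shape, not part of the harness:

	# list node images as JSON and surface anything not from registry.k8s.io
	# (assumes each entry carries a repoTags array)
	out/minikube-linux-amd64 -p newest-cni-784468 image list --format=json \
	  | jq -r '.[].repoTags[]?' | grep -v '^registry.k8s.io/'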

TestStartStop/group/newest-cni/serial/Pause (2.6s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-784468 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-784468 -n newest-cni-784468
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-784468 -n newest-cni-784468: exit status 2 (281.407066ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-784468 -n newest-cni-784468
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-784468 -n newest-cni-784468: exit status 2 (282.863929ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-784468 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-784468 -n newest-cni-784468
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-784468 -n newest-cni-784468
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.60s)
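The Pause subtest drives a pause → status → unpause → status cycle; while components are paused, minikube status exits non-zero, which the harness explicitly tolerates ("may be ok", above). A hand-run sketch of the same cycle:

	out/minikube-linux-amd64 pause -p newest-cni-784468
	# status exits non-zero while the apiserver is paused; don't let it abort a script
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-784468 || true
	out/minikube-linux-amd64 unpause -p newest-cni-784468
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-784468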

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9v7d2" [31e0a7fb-b471-4232-9098-01a303465e85] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004101692s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-688254 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/auto/Start (42.74s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.742751589s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.74s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-688254 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-688254 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-688254 -n embed-certs-688254
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-688254 -n embed-certs-688254: exit status 2 (307.460083ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-688254 -n embed-certs-688254
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-688254 -n embed-certs-688254: exit status 2 (300.509518ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-688254 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-688254 -n embed-certs-688254
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-688254 -n embed-certs-688254
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.89s)

TestNetworkPlugins/group/kindnet/Start (39.32s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (39.323205486s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (39.32s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-609794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-609794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ngt4j" [d73949cc-163a-4425-9a77-f9da8b5a21e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ngt4j" [d73949cc-163a-4425-9a77-f9da8b5a21e8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004495461s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.18s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dfkxq" [7f18786c-d1e6-4d94-8cd7-e72601994def] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003520825s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-609794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
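Localhost and HairPin probe the same netcat pod two ways: over its own loopback, and back through its Service name, which exercises hairpin NAT in the CNI. The equivalent manual probes (assuming, per the commands above, a netcat Deployment and Service listening on 8080):

	# loopback: the pod dials its own port directly
	kubectl --context auto-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: the pod dials itself back through its Service VIP
	kubectl --context auto-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"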

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-609794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-609794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-672zt" [5a1ba1d0-694a-4d66-b680-45dcb5a5a84a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-672zt" [5a1ba1d0-694a-4d66-b680-45dcb5a5a84a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004013245s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-609794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/calico/Start (55.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0915 07:37:11.534560   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:11.540991   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:11.552323   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:11.573780   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:11.615269   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:11.697089   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:11.858773   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:12.180774   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:12.822659   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:14.104161   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:16.665491   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:17.204733   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.808967305s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.81s)

TestNetworkPlugins/group/custom-flannel/Start (47.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0915 07:37:21.787213   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:32.029529   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:34.135320   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/addons-022322/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (47.846062405s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.85s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wscms" [d1ab3d8a-10cc-42fa-b3c7-152e371f7845] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004282862s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wscms" [d1ab3d8a-10cc-42fa-b3c7-152e371f7845] Running
E0915 07:37:52.511240   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004388175s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-455004 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-455004 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.61s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-455004 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-455004 -n no-preload-455004
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-455004 -n no-preload-455004: exit status 2 (280.067415ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-455004 -n no-preload-455004
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-455004 -n no-preload-455004: exit status 2 (287.42676ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-455004 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-455004 -n no-preload-455004
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-455004 -n no-preload-455004
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.61s)

TestNetworkPlugins/group/enable-default-cni/Start (66.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m6.333311141s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.33s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qvvjl" [753ad816-987a-43e4-8aa3-da3f713285eb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006016467s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-609794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-609794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5l9pb" [70512dc9-79f7-42a3-93bc-f72639a2bb82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5l9pb" [70512dc9-79f7-42a3-93bc-f72639a2bb82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004079915s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-609794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-609794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vjlb4" [c61b5085-b554-40f5-a336-bcca8f37f864] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vjlb4" [c61b5085-b554-40f5-a336-bcca8f37f864] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004969744s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-609794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-609794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (48.69s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.689836616s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.69s)

TestNetworkPlugins/group/bridge/Start (67.26s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-609794 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.256796869s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-r6db2" [980cbeed-2d4f-454e-9896-60d44051de32] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003507428s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-r6db2" [980cbeed-2d4f-454e-9896-60d44051de32] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003115148s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-784745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-784745 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-784745 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745: exit status 2 (334.213632ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745: exit status 2 (349.625643ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-784745 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-784745 -n default-k8s-diff-port-784745
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-609794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-609794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n6g6l" [996b51da-0299-481c-9d0b-41a50ec4e5a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n6g6l" [996b51da-0299-481c-9d0b-41a50ec4e5a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003884682s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-609794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lx7lb" [60f17952-b1ca-4996-b4dc-2c5842b74d1d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004901565s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
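ControllerPod blocks until the CNI's daemon pod reports Running before any connectivity checks begin. An equivalent one-liner with kubectl wait (a sketch; the label and namespace are taken from the log above):

	# wait up to 10 minutes for the flannel daemonset pod to become Ready
	kubectl --context flannel-609794 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m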

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-609794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-609794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hqr7g" [70ed0846-827f-4707-9415-92217dd28b60] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hqr7g" [70ed0846-827f-4707-9415-92217dd28b60] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006688548s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.19s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-609794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-609794 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-609794 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h88nt" [18b9c050-900a-4479-a21b-92fa1479e6ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h88nt" [18b9c050-900a-4479-a21b-92fa1479e6ef] Running
E0915 07:39:55.395521   12591 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/old-k8s-version-423109/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004070362s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.17s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-609794 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-609794 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (25/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
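
These three tunnel skips all trip the same guard at functional_test_tunnel_test.go:99, since DNS forwarding for minikube tunnel is only wired up for HyperKit on macOS. The checks they would otherwise run boil down to resolving a cluster-internal name against the cluster DNS service, the same probe the debugLogs sections below attempt with dig and nslookup. A minimal Go sketch of that probe, with the service IP (10.96.0.10) and hostname taken from the log output:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send lookups straight to the in-cluster DNS service, as
	// "dig @10.96.0.10 kubernetes.default.svc.cluster.local" does;
	// this only succeeds from a host that can reach 10.96.0.10.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}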

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-133123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-133123
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (2.91s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-609794 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-609794

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-609794

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /etc/hosts:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /etc/resolv.conf:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-609794

>>> host: crictl pods:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: crictl containers:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> k8s: describe netcat deployment:
error: context "kubenet-609794" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-609794" does not exist

>>> k8s: netcat logs:
error: context "kubenet-609794" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-609794" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-609794" does not exist

>>> k8s: coredns logs:
error: context "kubenet-609794" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-609794" does not exist

>>> k8s: api server logs:
error: context "kubenet-609794" does not exist

>>> host: /etc/cni:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: ip a s:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: ip r s:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: iptables-save:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: iptables table nat:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-609794" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-609794" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-609794" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: kubelet daemon config:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> k8s: kubelet logs:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-370110
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-546714
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-377398
contexts:
- context:
    cluster: cert-expiration-370110
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-370110
  name: cert-expiration-370110
- context:
    cluster: kubernetes-upgrade-546714
    user: kubernetes-upgrade-546714
  name: kubernetes-upgrade-546714
- context:
    cluster: pause-377398
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-377398
  name: pause-377398
current-context: cert-expiration-370110
kind: Config
preferences: {}
users:
- name: cert-expiration-370110
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/cert-expiration-370110/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/cert-expiration-370110/client.key
- name: kubernetes-upgrade-546714
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/kubernetes-upgrade-546714/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/kubernetes-upgrade-546714/client.key
- name: pause-377398
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/pause-377398/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/pause-377398/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-609794

>>> host: docker daemon status:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: docker daemon config:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: docker system info:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: cri-docker daemon status:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: cri-docker daemon config:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: cri-dockerd version:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: containerd daemon status:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: containerd daemon config:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: containerd config dump:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: crio daemon status:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: crio daemon config:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: /etc/crio:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"

>>> host: crio config:
* Profile "kubenet-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-609794"
----------------------- debugLogs end: kubenet-609794 [took: 2.762472609s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-609794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-609794
--- SKIP: TestNetworkPlugins/group/kubenet (2.91s)
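
Every kubectl command in the debugLogs above fails identically because the kubenet-609794 context never existed: the group was skipped before minikube start ran, so the kubeconfig dumped under ">>> k8s: kubectl config" only contains the cert-expiration-370110, kubernetes-upgrade-546714, and pause-377398 clusters left over from other tests. A minimal client-go sketch that reproduces the logged failure mode, assuming the default kubeconfig loading rules:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Request a context that is absent from the kubeconfig; building
	// the client config then fails with "context was not found for
	// specified context: kubenet-609794", matching the log lines above.
	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "kubenet-609794"},
	)
	if _, err := cfg.ClientConfig(); err != nil {
		fmt.Println(err)
	}
}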

TestNetworkPlugins/group/cilium (3.09s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-609794 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-609794

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-609794

>>> host: /etc/nsswitch.conf:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /etc/hosts:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /etc/resolv.conf:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-609794

>>> host: crictl pods:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: crictl containers:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> k8s: describe netcat deployment:
error: context "cilium-609794" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-609794" does not exist

>>> k8s: netcat logs:
error: context "cilium-609794" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-609794" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-609794" does not exist

>>> k8s: coredns logs:
error: context "cilium-609794" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-609794" does not exist

>>> k8s: api server logs:
error: context "cilium-609794" does not exist

>>> host: /etc/cni:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: ip a s:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: ip r s:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: iptables-save:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: iptables table nat:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-609794

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-609794

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-609794" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-609794" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-609794

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-609794

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-609794" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-609794" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-609794" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-609794" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-609794" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: kubelet daemon config:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> k8s: kubelet logs:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-370110
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-546714
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19644-5979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-377398
contexts:
- context:
    cluster: cert-expiration-370110
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-370110
  name: cert-expiration-370110
- context:
    cluster: kubernetes-upgrade-546714
    user: kubernetes-upgrade-546714
  name: kubernetes-upgrade-546714
- context:
    cluster: pause-377398
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-377398
  name: pause-377398
current-context: cert-expiration-370110
kind: Config
preferences: {}
users:
- name: cert-expiration-370110
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/cert-expiration-370110/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/cert-expiration-370110/client.key
- name: kubernetes-upgrade-546714
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/kubernetes-upgrade-546714/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/kubernetes-upgrade-546714/client.key
- name: pause-377398
  user:
    client-certificate: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/pause-377398/client.crt
    client-key: /home/jenkins/minikube-integration/19644-5979/.minikube/profiles/pause-377398/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-609794

>>> host: docker daemon status:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: docker daemon config:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: docker system info:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: cri-docker daemon status:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: cri-docker daemon config:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: cri-dockerd version:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: containerd daemon status:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: containerd daemon config:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: containerd config dump:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: crio daemon status:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: crio daemon config:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: /etc/crio:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"

>>> host: crio config:
* Profile "cilium-609794" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-609794"
----------------------- debugLogs end: cilium-609794 [took: 2.94001017s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-609794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-609794
--- SKIP: TestNetworkPlugins/group/cilium (3.09s)
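
Although both network-plugin groups were skipped, each still spent a few seconds on teardown (2.91s and 3.09s) because the harness always deletes the never-started profile, as the helpers_test.go lines show. A minimal sketch of that cleanup step, shelling out the same way the harness logs it (binary path and profile name copied from the output above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent of the logged cleanup step:
	//   (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-609794
	out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", "cilium-609794").CombinedOutput()
	fmt.Println(string(out), err)
}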
